End-to-end speech translation (that is, the direct translation of an audio signal without intermediate transcription steps) has recently gained increasing interest in the scientific community thanks to the recent advances of neural approaches in the related ASR and MT fields [1, 2, 3, 4, 5]. Effective approaches to the task can become a useful solution for languages that do not have a formal writing system, as they make it possible to create a collection of spoken utterances paired with their translations into a more common language. We can also expect that, in the future, end-to-end speech translation systems will overcome the problems caused by the cumulative effect of speech recognition errors in pipelined architectures. FBK’s submission to the IWSLT 2018 Speech Translation (ST) task relies on a single model that takes as input features extracted from an English audio signal and returns as output a written translation in German. As the input is not in raw wave form, one might argue that the “end-to-end” denomination does not fit this formulation of the task. Nevertheless, since feeding the network with the input features released by the task organizers was allowed, we adhere to the looser definition of “end-to-end” implicit in this year’s task formulation. (Our work was pursued during a summer project, with the goal of gaining hands-on expertise in this new, promising field with the simplification of a standardized data set.)
Our system was trained using the state-of-the-art sequence-to-sequence model based on LSTMs and CNNs introduced in [2]. Considering the high number of experiments to run, and the high number of epochs needed to train a speech translation model (up to in the case of our final submission), the model was implemented using the fairseq (http://github.com/facebookresearch/fairseq) sequence-to-sequence learning toolkit from Facebook AI Research. The toolkit, which is tailored to NMT, was adapted to the ST task, showing considerable reductions in training time compared to the same models implemented on other platforms (from hours to minutes to process the same amount of training instances).
One of the main challenges we faced was how to maximize the usefulness of the available training data by weeding out noisy (and potentially harmful) instances. For this purpose, we developed the two data cleaning procedures described in Section 2. The architectural choices and the main implementation details of our system are described in Section 3. In Section 4, we report the results on our validation set, which were obtained by using different data conditions and hyper-parameters. Section 5 concludes the paper with final remarks.
2 Data Cleaning
Our submission was obtained by a model trained solely on the data released for the speech translation task. Before building the model, we devoted particular attention to the quality of the training material, aiming to reduce the possible impact that noise in the data can have on training time and model convergence. Indeed, the initial training set of instances comprised elements featuring either a partial alignment between the audio signal and the corresponding transcription, or a skewed ratio between the number of feature frames and the characters in the transcription. To identify and weed out such noisy and potentially harmful training items, we applied two cleaning procedures. Both procedures take advantage of the available English transcriptions of the audio signals (note that data cleaning is the only phase in which we used the English transcriptions; since this step is independent of the actual system training, our approach is still fully end to end) and were run in cascade, after the removal of the items to be used as our development set. As discussed in Section 4.1, though smaller in size, the resulting subsets of the original training corpus yielded performance improvements on development data, especially when used for fine-tuning a model trained on the original unfiltered corpus.
2.1 Cleaning Based on Alignment
Starting from the initial training corpus of instances (called “Parallel” henceforth), the first cleaning step aimed to identify and remove the items featuring a poor alignment between the audio signal and the text. Assuming that the English and German texts are parallel, the potential noise introduced by such instances is represented by wrong transcriptions/translations (either totally inadequate or containing spurious words) of the original source signal. To identify them, our approach was to align each audio signal with the corresponding English transcription and then decide what to retain based on the alignment quality (i.e. considering unaligned words as evidence of noise). We performed the alignment on a sentence-by-sentence basis using Gentle (https://lowerquality.com/gentle/), a forced aligner based on Kaldi (http://kaldi-asr.org/index.html). After the alignment, we removed all the training instances in which at least one word in the transcription was not aligned with the corresponding audio segment. This strict cleaning policy (due to time limitations, we did not experiment with less aggressive strategies) resulted in the removal of instances, which reduced the initial “Parallel” corpus to items. Henceforth, the corpus resulting from this first cleaning step will be referred to as “Clean 1”.
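As a concrete illustration, the filtering rule can be sketched as follows. The JSON structure (a "words" list whose entries carry a "case" field, with "success" marking aligned words) reflects Gentle's output format, but the helper name and the toy data are our own:

```python
import json

def is_fully_aligned(gentle_output: str) -> bool:
    """Return True if every transcript word was aligned to the audio.

    Assumes Gentle's JSON output format, where each entry in "words"
    carries a "case" field ("success" for aligned words).
    """
    result = json.loads(gentle_output)
    return all(w.get("case") == "success" for w in result.get("words", []))

# Toy examples: one fully aligned utterance, one with an unaligned word.
aligned = json.dumps({"words": [{"word": "hello", "case": "success"},
                                {"word": "world", "case": "success"}]})
partial = json.dumps({"words": [{"word": "hello", "case": "success"},
                                {"word": "world", "case": "not-found-in-audio"}]})
```

Under the strict policy described above, an instance is kept only when the predicate returns True for its alignment output.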
2.2 Cleaning Based on Frames/Characters Ratio
The second cleaning step aimed to identify and remove from “Clean 1” the training instances featuring a skewed ratio between the number of feature frames and the characters in the transcription. In this case, the potential noise is due to portions of the original speech that correspond to long silences, background noise (e.g. laughter and applause), or words that are not present in the transcription/translation. To identify such possible outliers, looking at the ratios reported in Figure 1, we decided to cut the distribution so as to retain only the training instances belonging to ratio bins that contain at least items. The corresponding cutting values of and resulted in the removal of instances, which further reduced our training corpus to items. Henceforth, the corpus resulting from our second cleaning step will be referred to as “Clean 2”.
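A minimal sketch of this bin-based filtering. The bin width and the minimum bin population are hypothetical placeholders, not the paper's actual cutting values:

```python
from collections import Counter

def filter_by_ratio(corpus, bin_width=1.0, min_bin_count=10):
    """Keep only instances whose frames/characters ratio falls in a
    sufficiently populated bin.

    `corpus` is a list of (n_frames, transcription) pairs;
    `bin_width` and `min_bin_count` are illustrative values.
    """
    ratios = [n_frames / max(len(text), 1) for n_frames, text in corpus]
    # Histogram of ratios, discretized into bins of width `bin_width`.
    bins = Counter(int(r / bin_width) for r in ratios)
    kept_bins = {b for b, c in bins.items() if c >= min_bin_count}
    return [ex for ex, r in zip(corpus, ratios)
            if int(r / bin_width) in kept_bins]
```

An instance whose ratio lands in a sparsely populated bin (e.g. a clip dominated by applause, hence far more frames than characters) is discarded as an outlier.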
3 Seq2seq Speech Translation model
Our model is an attentional encoder-decoder network adapted to speech input. The source-side input length is orders of magnitude greater than on the decoder side, and thus the temporal dimension is reduced using 2-D CNNs with stride (2, 2). The decoder is inspired by the early deep-transition decoder used in Nematus, which stacks two LSTM units in such a way that each LSTM is not recurrent by itself, while the stack of the two is globally recurrent. A schema of the model is depicted in Figure 2.
3.1 Encoder

The input to the encoder is a variable-length audio sequence with features for each time step. At first, the input sequence is processed by two time-distributed densely-connected layers of size and , respectively, each followed by a tanh activation. The output of the densely-connected layers is then processed by two stacked 2-dimensional convolutional layers, each having a kernel and stride (2, 2). Let n be the sequence length and f the number of input features to the first convolutional layer. Given the stride of (2, 2), each convolution roughly halves both dimensions: the output of the first convolution has size n/2 × f/2 per filter, and that of the second has size n/4 × f/4. The filters are then flattened to obtain, for each of the n/4 remaining time steps, a single feature vector, which is subsequently processed by a stack of three bidirectional LSTM layers. The initial state of the LSTM is initialized as a zero vector at the beginning of the training, but it is then optimized via back-propagation together with the rest of the network. We found that training the initial state boosts performance and speeds up model convergence.
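The temporal reduction performed by the two stride-(2, 2) convolutions can be sketched with a small shape calculator. Kernel size 3, padding 1 and the number of filters are illustrative assumptions, not values taken from the paper:

```python
def conv2d_out(size, kernel=3, stride=2, padding=1):
    """Output length along one dimension of a 2-D convolution."""
    return (size + 2 * padding - kernel) // stride + 1

def encoder_shapes(n_frames, n_feats, n_filters=16):
    """Shapes through the two stride-(2, 2) convolutions.

    Kernel 3x3, padding 1 and 16 filters are placeholders; the point is
    that each convolution roughly halves both the time and the feature
    dimensions, so the flattened per-step vector fed to the BiLSTM stack
    has size (n_feats // 4) * n_filters.
    """
    t1, f1 = conv2d_out(n_frames), conv2d_out(n_feats)
    t2, f2 = conv2d_out(t1), conv2d_out(f1)
    return (t1, f1), (t2, f2), f2 * n_filters
```

For example, a 1000-frame utterance with 40 features per frame leaves the convolutional block as 250 time steps, a 4x reduction of the temporal dimension.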
3.2 Decoder

The decoder consists of two stacked LSTM layers. The input of the first layer is the embedding of the last generated character. The output of the first layer is used as a query vector to compute an attention over the last layer of the encoder. The attention output is then used as input to the second LSTM layer. The hidden and cell states received as input by each of the two LSTM layers are, at every time step, the last hidden and cell states produced by the other LSTM layer. The last encoder output is averaged over the time dimension, and the resulting tensor is passed as input to two different densely-connected layers with tanh non-linearity. These two functions compute the initialization of the hidden and cell states of the first LSTM layer. The deep output is a densely-connected nonlinear function that takes as input the concatenation of the LSTM output, the attention output and the current symbol (character) embedding, and outputs a tensor of size . This tensor is finally multiplied by a second character embedding matrix to compute the scores over the whole vocabulary.
3.3 Attention

The attention layer computes a distribution of weights that sums to one over the encoder output sequence (soft attention), with no positional information (global attention). The score of each encoder position is computed according to its relevance with respect to the decoder state, using the general attention score proposed by Luong et al.
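A pure-Python sketch of this scoring scheme, where score(h_t, h_s) = h_t^T W h_s is normalized with a softmax over encoder positions. The function name and the list-based vector representation are ours; real implementations operate on batched tensors:

```python
import math

def general_attention(dec_state, enc_outputs, W):
    """Luong-style "general" attention over a sequence of encoder outputs.

    `dec_state` is the decoder query vector, `enc_outputs` a list of
    encoder output vectors, `W` the learned bilinear matrix (list of rows).
    Returns the attention weights and the resulting context vector.
    """
    def matvec(M, v):
        return [sum(m * x for m, x in zip(row, v)) for row in M]

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # Relevance score of each encoder position w.r.t. the decoder state.
    scores = [dot(dec_state, matvec(W, h_s)) for h_s in enc_outputs]
    # Numerically stable softmax over positions.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Context vector: weighted sum of encoder outputs.
    dim = len(enc_outputs[0])
    context = [sum(w * h[i] for w, h in zip(weights, enc_outputs))
               for i in range(dim)]
    return weights, context
```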
3.4 Increased Regularization
Due to the small size of the training data, we found it useful to apply some regularization techniques. The first and most common is dropout, applied to each layer. Instead of variational dropout, we preferred the fast LSTM implementation provided by the PyTorch library, which uses regular dropout.
Besides dropout, we applied weight normalization and label smoothing as additional regularization techniques. Weight normalization is a simple technique that decomposes the parameter matrices into their magnitude and direction components, producing a transformation that scales the weights and reduces the gradient covariance. The result is faster convergence and a limitation of the weight space, which has a regularizing effect.
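The reparameterization itself is compact enough to sketch for a single weight vector (helper name ours): the weight is expressed as w = g · v/||v||, so the magnitude g and the direction v are learned as separate parameters.

```python
import math

def weight_norm(v, g):
    """Reparameterize a weight vector as w = g * v / ||v||,
    separating its magnitude (g) from its direction (v / ||v||),
    as in Salimans & Kingma's weight normalization.
    """
    norm = math.sqrt(sum(x * x for x in v))
    return [g * x / norm for x in v]
```

By construction, the resulting vector always has Euclidean norm g, whatever the scale of v.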
Label smoothing smooths the cross-entropy cost function by assigning a weight of 1 − ε to the probability of the correct symbol, and ε to the sum of the probabilities of the other symbols. Label smoothing makes the model less confident in its predictions, producing a regularizing effect. In NMT, it has been observed that, despite the increased loss and perplexity usually obtained with this technique, the translations are usually better and end up in improved BLEU scores.
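The smoothed loss can be sketched as follows, using the standard formulation in which the correct symbol receives mass 1 − ε and the remainder is spread uniformly over the other symbols; the value of ε used in our experiments is not shown here:

```python
import math

def smoothed_cross_entropy(probs, target, eps=0.1):
    """Cross-entropy against a label-smoothed target distribution.

    `probs` are the model's predicted probabilities, `target` the index
    of the correct symbol; eps=0.1 is an illustrative value. With eps=0
    this reduces to the ordinary cross-entropy.
    """
    v = len(probs)
    loss = 0.0
    for i, p in enumerate(probs):
        # Smoothed target: 1 - eps on the gold symbol, eps spread uniformly.
        q = (1.0 - eps) if i == target else eps / (v - 1)
        loss -= q * math.log(p)
    return loss
```

A confident prediction pays a higher smoothed loss than plain cross-entropy, which is exactly the regularizing pressure described above.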
4 Experiments

In this section we summarize the experiments that motivated our choices for the final submission. Since the goal of our participation was to explore the potential of a single end-to-end model that translates directly from audio signals, we used as training data only the Speech Translation TED corpus released for the task. No pre-training was performed on different types of data (such pre-training would in fact rely on ASR data). All our models were trained using the Adam optimizer with a learning rate of , and values for β1 and β2 of and . We applied dropout of to all layers, including the input features. The norm of the gradients was clipped to . All the models were trained until convergence according to the loss on a held-out set of sentences (see Section 2). The results achieved by each model on the validation set are reported in Tables 1–4.
At first, we experimented with the reference implementation of the sequence-to-sequence model (https://github.com/eske/seq2seq), which is based on TensorFlow. However, with about hours per epoch on a single NVIDIA GTX-1080 GPU, its training time proved incompatible with the need of quickly testing a range of alternative solutions. To avoid this bottleneck, we re-implemented the same model within the fairseq toolkit, which is highly optimized to significantly reduce training time. Our re-implementation was indeed faster, reducing the training time to about minutes per epoch for the largest version of the training corpus (“Parallel”), and about minutes per epoch for the smallest one (“Clean 2”). The wall-clock time of a single training run was around hours, with a maximum of additional hours for fine-tuning.
4.1 Dataset Selection
In the first round of experiments, we were interested in understanding the impact of the data cleaning procedures described in Section 2. To this aim, we trained the base system on the three different versions of the dataset (i.e. “Parallel”, “Clean 1” and “Clean 2”) and evaluated the resulting models on the same validation set. The results listed in Table 1 show that Clean 1 provides the best result, while Clean 2 leads to a result equivalent to Parallel despite using about less data. We thus decided to use Clean 2 for the following experiments in order to have faster training cycles.
4.2 Dataset Fine-tuning and Restart Strategy
In this subsection we address two questions. The first one is whether it is useful to fine-tune a model trained on a larger dataset by using a smaller and cleaner subset of the same corpus. The second question is whether a restart strategy with learning rate annealing can improve the performance.
The first question was addressed by restarting the training of the model using the new, smaller dataset as training set, but with the same training policy and hyper-parameters. The results listed in Table 2(a) show that fine-tuning the model on cleaner data always helps. In particular, fine-tuning on Clean 2 (which is smaller but of higher quality) always results in better performance, especially in the case of a double step of fine-tuning (Parallel → Clean 1 → Clean 2). Interestingly, using only the clean data (Clean 1 → Clean 2) also yields better results than training the initial model on the original Parallel corpus.
To address the second question, we used the model trained on Clean 2 and restarted the training on the same training set with a policy of learning-rate annealing. To this aim, the learning rate was multiplied by every time the validation loss did not improve over the best one computed so far. We experimented with both Adam with annealing and SGD with Nesterov Accelerated Gradient (NAG) with annealing. The results listed in Table 2(b) show that, though Adam with annealing yields a better model, both BLEU scores are at least points lower than the worst model with fine-tuning.
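The annealing policy can be sketched as follows (the initial learning rate and the multiplier are placeholders, not the values used in our experiments):

```python
def anneal_on_plateau(val_losses, lr0=1e-3, factor=0.5):
    """Multiply the learning rate by `factor` whenever the validation
    loss fails to improve on the best value seen so far.

    `lr0` and `factor` are illustrative; returns the learning rate in
    effect after each validation step.
    """
    lr, best = lr0, float("inf")
    history = []
    for loss in val_losses:
        if loss < best:
            best = loss        # new best: keep the current learning rate
        else:
            lr *= factor       # plateau: anneal the learning rate
        history.append(lr)
    return history
```

Note that the comparison is always against the best loss so far, so a partial recovery that does not beat the best checkpoint still triggers annealing.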
Table 3: Feature exploration results on the validation set (BLEU).

Model                          BLEU
+ Weight Normalization (WN)    8.69
+ Label Smoothing (LS)         8.74
+ Sigmoid Attention            8.44
+ WN and LS                    9.69
4.3 Features Exploration
In this round of experiments we trained our base model on the Clean 2 dataset and compared its results with models adding weight normalization, label smoothing, sigmoidal attention instead of softmax attention, and weight normalization and label smoothing together. The results on the validation set, listed in Table 3, show that both weight normalization and label smoothing give a small contribution, while sigmoidal attention slightly decreases translation quality. Moreover, the joint addition of label smoothing and weight normalization gives a markedly higher boost, suggesting that the models need strong regularization. Considering the scarce amount of data, the need for strong regularization was expected. However, it is interesting to note that increasing the dropout makes the base model converge to a much worse point (observed in preliminary experiments, not reported here). From now on, we call the model with weight normalization and label smoothing the “full model”.
4.4 Experiments with Full Model
Once we found that the full model is clearly better than the others, we replicated the experiments on all the datasets with the new model. In the second column of Table 4(a), we can see that this model is more sensitive to noise. In fact, training it on the “Parallel” set leads to poor translation performance ( BLEU), and this lower translation quality could not be anticipated by looking only at the training and validation losses. Nonetheless, fine-tuning this model on cleaner data, whose results are listed in Table 4(b), leads to improvements ranging from to BLEU points with respect to the models trained only on the clean data.
Unfortunately, the score of the best model (P → C2) represents only a limited improvement when compared with the best model in the second column of Table 4(a) (P → C1 → C2), which improved from of the base model to . The fifth row of Table 4(b) shows the results when the last fine-tuning step is performed using Adam with annealing instead of Adam with a fixed learning rate. Based on these results, we submitted our single best model (P → C2 Avg) as our contrastive submission.
4.5 Checkpoint Averaging and Ensemble Decoding
Checkpoint averaging consists in computing the average of the parameters from different checkpoints of the same training run. It has been shown that, in neural machine translation, this leads to better translation quality than using a single model. For each model, we computed the BLEU score on the validation set for the last checkpoints, and averaged the weights of all the models whose results are less than BLEU points worse than the best one. The improvement can be observed by comparing the Best and Avg columns of Table 4(b).
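Checkpoint averaging reduces to an element-wise mean over the saved parameters; a minimal sketch with plain lists standing in for tensors:

```python
def average_checkpoints(checkpoints):
    """Element-wise average of model parameters from several checkpoints.

    Each checkpoint is a dict mapping parameter names to lists of floats
    (a stand-in for the real tensors in a saved model state).
    """
    n = len(checkpoints)
    return {name: [sum(ckpt[name][i] for ckpt in checkpoints) / n
                   for i in range(len(checkpoints[0][name]))]
            for name in checkpoints[0]}
```

The averaged parameters are then loaded into a single model for decoding, so inference cost is the same as for any individual checkpoint.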
We also performed ensemble decoding with models trained in different runs. The ensemble involved all the Avg checkpoints listed in Table 4(b), except for “C1 → C2”, which was trained using a different vocabulary. The ensemble of the models obtained a result of BLEU on the validation set.
4.6 Submitted Systems and Results
Based on the outcomes of the above experiments on development data, we opted for submitting the following systems:
Primary: ensemble of 4 systems (Section 4.5).
Contrastive: checkpoint averaging of P → C2 (Table 4(b)).
The primary system obtained a BLEU score of on our validation set and of on the test set, whereas the contrastive system scored, respectively, and on the validation and the test set.
5 Conclusions

We described FBK’s participation in the end-to-end speech translation task at IWSLT 2018. We have shown that data cleaning is useful in reducing training time by discarding a good portion of the training data without hurting translation quality. We have also observed that fine-tuning a model on a cleaner dataset can bring improvements of up to BLEU points. Moreover, regularizing the model with weight normalization and label smoothing can produce an improvement of more than BLEU point on clean datasets, but the same model fails to converge to a good point when using all the data. In addition, checkpoint averaging and ensemble decoding gave us a further gain of BLEU point. The final scores on this year’s test set are and BLEU for our best single model and for the primary submission based on ensemble decoding, respectively. In order to improve the competitiveness of this system, our next experiments will include ASR for pretraining the encoder or for multi-task learning.
Acknowledgements

We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPUs used for this research.
References

-  A. Bérard, O. Pietquin, C. Servan, and L. Besacier, “Listen and translate: A proof of concept for end-to-end speech-to-text translation,” arXiv preprint arXiv:1612.01744, 2016.
-  A. Bérard, L. Besacier, A. C. Kocabiyikoglu, and O. Pietquin, “End-to-end automatic speech translation of audiobooks,” in ICASSP 2018-IEEE International Conference on Acoustics, Speech and Signal Processing, 2018.
-  A. Anastasopoulos and D. Chiang, “Tied multitask learning for neural speech translation,” in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), vol. 1, 2018, pp. 82–91.
-  R. J. Weiss, J. Chorowski, N. Jaitly, Y. Wu, and Z. Chen, “Sequence-to-sequence models can directly translate foreign speech,” 2017.
-  A. Anastasopoulos and D. Chiang, “Leveraging translations for speech transcription in low-resource settings,” in Proceedings of Interspeech 2018, 2018.
-  L. Duong, A. Anastasopoulos, D. Chiang, S. Bird, and T. Cohn, “An attentional model for speech translation without transcription,” in Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016, pp. 949–959.
-  J. Gehring, M. Auli, D. Grangier, D. Yarats, and Y. N. Dauphin, “Convolutional sequence to sequence learning,” in International Conference on Machine Learning, 2017, pp. 1243–1252.
-  S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
-  R. Sennrich, O. Firat, K. Cho, A. Birch, B. Haddow, J. Hitschler, M. Junczys-Dowmunt, S. Läubli, A. V. M. Barone, J. Mokry, et al., “Nematus: a toolkit for neural machine translation,” in Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics, 2017, pp. 65–68.
-  M. Schuster and K. K. Paliwal, “Bidirectional recurrent neural networks,” IEEE Transactions on Signal Processing, vol. 45, no. 11, pp. 2673–2681, 1997.
-  R. Pascanu, T. Mikolov, and Y. Bengio, “On the difficulty of training recurrent neural networks,” in International Conference on Machine Learning, 2013, pp. 1310–1318.
-  T. Luong, H. Pham, and C. D. Manning, “Effective approaches to attention-based neural machine translation,” in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015, pp. 1412–1421.
-  Y. Gal and Z. Ghahramani, “A theoretically grounded application of dropout in recurrent neural networks,” in Advances in neural information processing systems, 2016, pp. 1019–1027.
-  D. P. Kingma, T. Salimans, and M. Welling, “Variational dropout and the local reparameterization trick,” in Advances in Neural Information Processing Systems, 2015, pp. 2575–2583.
-  T. Salimans and D. P. Kingma, “Weight normalization: A simple reparameterization to accelerate training of deep neural networks,” in Advances in Neural Information Processing Systems, 2016, pp. 901–909.
-  C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2818–2826.
-  M. X. Chen, O. Firat, A. Bapna, M. Johnson, W. Macherey, G. Foster, L. Jones, N. Parmar, M. Schuster, Z. Chen, et al., “The best of both worlds: Combining recent advances in neural machine translation,” arXiv preprint arXiv:1804.09849, 2018.
-  K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, “Bleu: a method for automatic evaluation of machine translation,” in Proceedings of the 40th annual meeting on association for computational linguistics, 2002, pp. 311–318.
-  D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in Proc. of ICLR 2015, 2015.
-  M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al., “Tensorflow: a system for large-scale machine learning.”
-  P. Bahar, T. Alkhouli, J.-T. Peter, C. J.-S. Brix, and H. Ney, “Empirical investigation of optimization algorithms in neural machine translation,” The Prague Bulletin of Mathematical Linguistics, vol. 108, no. 1, pp. 13–25, 2017.
-  I. Sutskever, J. Martens, G. Dahl, and G. Hinton, “On the importance of initialization and momentum in deep learning,” in International Conference on Machine Learning, 2013, pp. 1139–1147.
-  M. Junczys-Dowmunt, T. Dwojak, and H. Hoang, “Is neural machine translation ready for deployment? a case study on 30 translation directions.”
-  S. Bansal, H. Kamper, K. Livescu, A. Lopez, and S. Goldwater, “Pre-training on high-resource speech recognition improves low-resource speech-to-text translation,” arXiv preprint arXiv:1809.01431, 2018.