The increasing amount of text data in the digital age calls for methods that reduce reading time while maintaining information content. Summarization achieves this by deleting, generalizing or paraphrasing fragments of the input text. Summarization methods can be categorized into single-document or multi-document and extractive or abstractive approaches. In contrast to the single-document setting Rush et al. (2015), the multi-document setup can leverage the fact that in some domains, like news articles, different sources describe the same event Banerjee et al. (2016); Haghighi and Vanderwende (2009). Extractive methods rely solely on the words of the input, e.g. extracting whole sentences Erkan and Radev (2004); Parveen and Strube (2015) or recombining phrases at the sentence level Banerjee et al. (2016). Abstractive approaches, on the other hand, are rarely bound to such constraints and gained a lot of traction due to recent advances in machine translation like the encoder-decoder framework Sutskever et al. (2014); Paulus et al. (2017) or the attention mechanism Bahdanau et al. (2014); Rush et al. (2015); Paulus et al. (2017). Another, more general distinction is the need for supervision. Supervised methods require training pairs of input text and output summary Paulus et al. (2017); Rush et al. (2015), whereas unsupervised methods exploit inherent properties of the input like the frequency of phrases Banerjee et al. (2016) or centrality Erkan and Radev (2004).

In this work we use a Variational Autoencoder (VAE) Kingma and Welling (2013); Bowman et al. (2016) and control the decoding length Kikuchi et al. (2016) to obtain a shortened version of an input sentence. VAEs work unsupervised, and decoding makes use of the whole available vocabulary. This work is organized as follows. First we give background on the technologies and concepts used. In Section 3 we describe the architecture of our model. The data we use for the experiments in Section 5 is outlined in Section 4. Finally, we report the results in Section 6.
2.1 Variational Autoencoder
The Variational Autoencoder (VAE) is a generative model first introduced by Kingma and Welling (2013). Like regular autoencoders, VAEs learn a mapping from a high-dimensional input $x$ to a low-dimensional latent variable $z$. Instead of doing this in a deterministic way, the VAE imposes a prior distribution on $z$, e.g. a standard Gaussian:

$$z \sim \mathcal{N}(0, I) \quad (1)$$
The desired effect is that each area in the latent space acquires a semantic meaning, so that samples from $p(z)$ can be decoded in a meaningful way. The decoder is trained to reconstruct the input $x$ based on the latent variable $z$. In order to train the model via gradient descent, the reparameterization trick Kingma and Welling (2013) was introduced. This trick allows the gradient to flow through the sampling decision of $z$ (Formula 1) by outsourcing the stochastic operation. Let $\mu$ and $\sigma$ be deterministic outputs of the encoder $q(z|x)$:

$$z = \mu + \sigma \odot \epsilon \quad (2)$$
where $\epsilon \sim \mathcal{N}(0, I)$ and $\odot$ is the element-wise product. To prevent the model from pushing $\sigma$ close to zero and basically falling back to a regular autoencoder, the objective is extended by the Kullback-Leibler (KL) divergence between the prior $p(z)$ and the posterior $q(z|x)$:

$$\mathcal{L} = \mathbb{E}_{q(z|x)}\left[\log p(x|z)\right] - \mathrm{KL}\left(q(z|x)\,\|\,p(z)\right) \quad (3)$$
The goal is to have a non-zero but not out-of-control KL term while maintaining a reasonable reconstruction loss. This yields a semantically rich latent variable and good generation ability.
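The reparameterization trick and the KL term above can be sketched in a few lines of NumPy (a minimal illustration with hypothetical function names, not the paper's implementation):

```python
import numpy as np

def reparameterize(mu, log_sigma, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I); the randomness is
    moved into eps so gradients can flow through mu and sigma (Formula 2)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(log_sigma) * eps

def kl_divergence(mu, log_sigma):
    """KL( N(mu, sigma^2) || N(0, I) ) for a diagonal Gaussian posterior,
    summed over the latent dimensions."""
    return -0.5 * np.sum(1 + 2 * log_sigma - mu**2 - np.exp(2 * log_sigma))
```

Note that the KL term is exactly zero when $\mu = 0$ and $\sigma = 1$, i.e. when the posterior collapses onto the prior, which is the degenerate case the objective has to avoid.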
2.2 Controlling Output Length
There are different methods for controlling the output length in an encoder-decoder model. One of them is LenEmb Kikuchi et al. (2016), where the decoder is fed information about the remaining length at every decoding step $t$. This information is encoded as an embedding matrix, accessed by the remaining length $l_t$ and learned during training. Instead of calculating the remaining length in bytes, we use a more straightforward approach and count whole words. At each decoding step the length embedding $e(l_t)$ is concatenated to the input, with $l_t$ chosen as follows:

$$l_t = \max(0, L - t)$$
where $L$ is the desired length. This encourages the decoder to fit the remaining information into the remaining words. The authors show, in a supervised summarization setup, that setting $L$ to the desired number of output bytes, conveniently the 75 bytes of the references, yields better performance during evaluation.
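The word-based remaining-length schedule can be sketched as follows (the embedding matrix `emb` and its dimensions are hypothetical placeholders for the learned length embeddings):

```python
import numpy as np

def length_embedding_inputs(desired_len, num_steps, emb):
    """For each decoding step t, compute the remaining word budget
    l_t = max(0, L - t) and look up its (learned) embedding row,
    which the decoder concatenates to its input."""
    return [emb[max(0, desired_len - t)] for t in range(num_steps)]

rng = np.random.default_rng(0)
emb = rng.standard_normal((21, 50))   # rows for l = 0..20, embedding size 50 (assumed)
inputs = length_embedding_inputs(20, 25, emb)
```

Once the budget is exhausted the decoder keeps receiving the embedding for length 0, signaling that it should end the sentence.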
In order to apply the VAE principle to text data, Bowman et al. (2016) employ RNNs as encoder and decoder. The vectors $\mu$ and $\sigma$ are constructed from the last hidden state of the encoder, and the first cell state of the decoder is initialized from $z$. Since then many improvements of this basic architecture have been published and are adopted in this work.

First of all, we use a bidirectional encoder which reads forward and backward through the input sequence $x_1, \dots, x_T$. At each encoding step $t$ the forward and backward hidden states $\overrightarrow{h_t}$ and $\overleftarrow{h_t}$ are concatenated to $h_t$. Vani and Birodkar (2016) then calculate $\mu$ and $\sigma$ from the mean of all hidden states $h_1, \dots, h_T$, arguing that this produces a better sequence representation and that the gradient reaches every input vector more easily. This is depicted in Figure 1.

Besides the reconstruction loss of the input sequence, Zhao et al. (2017) introduce a so-called bag-of-words loss. A $|V|$-dimensional vector is predicted by a feed-forward layer which takes $z$ as input, where $|V|$ is the vocabulary size. This vector is compared against a label which is the bag-of-words representation of the input sentence. This forces the model to put more general information into the latent variable, instead of only encoding the start of a sentence and deriving the rest by memorizing word order in the decoder.

As seen in Figure 2, the multi-layer RNN decoder is fed the latent variable $z$ at every decoding step, again giving the gradient an easier way to flow back. Additionally, the last emitted word and the length embedding (see 2.2) are concatenated to the input. To speed up training, sampled softmax Jean et al. (2015) estimates the softmax function at each decoding output.
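The mean-pooled bidirectional states and the bag-of-words label described above might be sketched as follows (all shapes, weight matrices and function names are illustrative assumptions, not the actual implementation):

```python
import numpy as np

def latent_params(h_fwd, h_bwd, W_mu, W_sig):
    """Compute mu and log-sigma from the mean of all concatenated
    bidirectional hidden states (Vani and Birodkar, 2016), rather
    than from the last encoder state only."""
    h = np.concatenate([h_fwd, h_bwd], axis=-1)   # (T, 2 * hidden)
    h_mean = h.mean(axis=0)                       # mean over all time steps
    return h_mean @ W_mu, h_mean @ W_sig

def bow_target(token_ids, vocab_size):
    """Bag-of-words label over the vocabulary (Zhao et al., 2017):
    1 for every word type occurring in the input sentence."""
    y = np.zeros(vocab_size)
    y[token_ids] = 1.0
    return y
```

The bag-of-words target deliberately discards word order, so predicting it from $z$ alone rewards encoding the sentence's overall content rather than just its beginning.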
The data setup is similar to Rush et al. (2015). For training they use 4 million pairs of title and first sentence of the article from the Gigaword data set Graff et al. (2003). As we do not need supervision, we remove the titles, and due to resource limitations we remove all sentences with more than 30 words. The remaining 1.8 million training sentences are preprocessed by lower-casing and tokenizing all words. Additionally, numbers are replaced by # and words not in the top 40,000 are replaced by the UNK token. For evaluation we use the around 2,000 held-out article-title pairs from Gigaword as well as the DUC-2004 set Over et al. (2007). The latter consists of 500 news articles from the New York Times and Associated Press Wire services and comes with 4 different human-written reference summaries (capped at 75 bytes).
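The preprocessing steps could look roughly like this (a simplified sketch; the number pattern and whitespace tokenizer are assumptions, not the exact pipeline):

```python
import re

def preprocess(sentence, vocab, max_len=30):
    """Lower-case, tokenize, map numbers to '#' and out-of-vocabulary
    words to 'UNK'; drop sentences longer than max_len words by
    returning None."""
    tokens = sentence.lower().split()
    if len(tokens) > max_len:
        return None
    out = []
    for tok in tokens:
        if re.fullmatch(r"\d+([.,]\d+)?", tok):
            out.append("#")       # numbers collapsed into one symbol
        elif tok in vocab:
            out.append(tok)       # in the top-40,000 vocabulary
        else:
            out.append("UNK")     # out-of-vocabulary token
    return out
```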
We train the proposed model on the above presented data by maximizing the objective in Formula 3. To obtain a shortened version of the input sentence during testing, we set $L$ to the desired length. Our assumption is that the decoder tries to fit all the information present in the latent variable into the limited number of output words, skipping meaningless words or rephrasing semantic bits into fewer tokens, all while the implicit language model ensures a grammatically correct sentence.
We use Prefix as a baseline, which takes the first 75 characters of the input sentence as the summary. This simple baseline shows to what extent our model is able to pass the information of the input sentence through the low-dimensional latent variable.
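The baseline amounts to a single truncation:

```python
def prefix_baseline(sentence, limit=75):
    """Prefix baseline: the first 75 characters of the input
    sentence serve as the summary."""
    return sentence[:limit]
```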
5.2 Training Details
Similar to Bowman et al. (2016), a weight for the KL term in the objective function is annealed from 0 to 1 during training. This hinders the model from taking the easy way out and setting the KL term to zero by letting $q(z|x)$ be equal to the prior $p(z)$. That would mean no information is encoded in $z$, degenerating the VAE into a regular language model. Another technique to counter this is dropping the previously emitted word during decoding, forcing the decoder to rely more on the latent variable.
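Both tricks can be sketched as follows (the linear annealing schedule is an assumption; sigmoid schedules are also common):

```python
import random

def kl_weight(step, anneal_steps):
    """KL weight annealed from 0 to 1 over the first anneal_steps
    training steps (linear variant of Bowman et al., 2016)."""
    return min(1.0, step / anneal_steps)

def drop_words(tokens, keep_prob, unk="UNK", rng=random):
    """Word dropout on the decoder input: randomly replace previously
    emitted words so the decoder cannot just copy them and must fall
    back on the latent variable."""
    return [t if rng.random() < keep_prob else unk for t in tokens]
```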
Sampled softmax draws 1000 words. Beam search size is set to 100 and batch size to 512. The number of desired output words is set to 20 to reliably reach the 75 bytes of the reference summarizations. All other hyperparameters are found by Bayesian optimization (https://scikit-optimize.github.io/). Encoder and decoder RNN cell size is 243, the word embedding size is 254 and the latent variable has 124 dimensions. A 236-wide hidden layer predicts the bag-of-words vector. The best size for the length embeddings is found to be 50. Words are dropped during decoding with a probability of 0.20, and the output layer of the RNN cells is regularized with a dropout keep rate of 0.87.
Table 1 (surviving fragment, "no len limit" row): 14.49 / 2.06 / 12.28 and 19.91 / 4.14 / 18.02 (ROUGE-1 / ROUGE-2 / ROUGE-L on the two evaluation sets); last column: 51.
6.1 Evaluation Metric
ROUGE Lin (2004) is an n-gram based evaluation metric that quantifies the quality of a summary relative to given references. We report results on ROUGE-1 and ROUGE-2, which count the unigram and bigram overlap, respectively. Furthermore, the ROUGE-L score is based on the longest common subsequence (LCS) between the given texts. ROUGE is only an indicator of whether an automatically generated summary is as good as a human-written reference and should be handled with caution.
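The core n-gram and LCS computations behind these scores can be sketched as follows (a simplification of the official ROUGE package, which adds stemming, stopword options and multi-reference handling):

```python
from collections import Counter

def rouge_n_recall(candidate, reference, n=1):
    """ROUGE-N recall: clipped candidate n-gram counts divided by the
    total reference n-gram count."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(ref.values())
    return overlap / total if total else 0.0

def lcs_len(a, b):
    """Longest common subsequence length, the basis of ROUGE-L."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]
```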
6.2 Quantitative Evaluation
Before discussing the summarization results, we take a look at how LenEmb affects the model. Figures 4 and 5 show the output length distributions of the model without length restrictions and of the one with a desired length of 20 words. Figure 4 shows roughly the same distribution as the input sentences. Figure 5 shows that we are able to reduce the output length to near the desired 75 characters. In fact, 20 words are chosen so that the majority lies slightly above 75 characters, in order not to waste word space during ROUGE evaluation. We perform another analysis to study the effect of LenEmb: we train one model that is explicitly provided with the sentence length via LenEmb and one without this extension. The latter has to somehow encode the length information into the latent variable in order to reproduce the input sentence with minimal loss. Table 2 shows the results of a Linear Regression (LR) trained on the latent variables of both models, with the objective of predicting the length of the encoded sentence. For the model without explicit length information, LR can predict the length of the encoded sentence better from the latent variable alone. With less length information stored in the latent variable, it should be easier to steer the model to produce a certain output length.
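The length-probe experiment might be sketched with ordinary least squares (illustrative only, and reporting training R² rather than the paper's exact error metric):

```python
import numpy as np

def length_probe_r2(latents, lengths):
    """Fit a linear regression (with bias) predicting sentence length
    from the latent variable, and return R^2 on the training data --
    a rough probe of how much length information z carries."""
    X = np.hstack([latents, np.ones((len(latents), 1))])   # add bias column
    w, *_ = np.linalg.lstsq(X, lengths, rcond=None)
    pred = X @ w
    ss_res = np.sum((lengths - pred) ** 2)
    ss_tot = np.sum((lengths - np.mean(lengths)) ** 2)
    return 1 - ss_res / ss_tot
```

A high R² means the probe can read the sentence length directly out of the latent variable; a lower value suggests the length information lives elsewhere (e.g. in the LenEmb inputs).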
The ROUGE scores are found in Table 1. Our model is not able to beat the Prefix baseline. This, however, could be an effect of the VAE not being able to restore the correct input sentence. We verify this by testing a vanilla VAE model solely on reconstructing the input sentence and see that many mistakes are made. One reason is the lack of attention, which cannot be used in a VAE setting, to 'copy' rare words from the input. Our LenEmb model, however, is consistently better than the vanilla VAE, which shows that reducing the output length fits more information into the first 75 characters. If we could improve the vanilla VAE to reproduce the input sentence without making many mistakes, and the LenEmb model maintained its performance gain over the vanilla VAE, we could beat the Prefix baseline. The grammatical quality of the generated sentences was not evaluated.
We extended a VAE with LenEmb to control the length of the produced sentences. The hypothesis that stimulating the decoder to produce shorter outputs results in more information expressed in fewer words was supported in a summarization experiment. However, a simple baseline could not be beaten with this approach. A subject for further research is how the vanilla VAE can be improved to better reconstruct the input sentence, and how this influences the LenEmb-extended model. A Linear Regression experiment demonstrated that the length of the input sentence is encoded in the latent variable. All in all, this is a reasonable approach to constructing an unsupervised abstractive sentence summarization model and worth further investigation.
- Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.
- Banerjee et al. (2016) Siddhartha Banerjee, Prasenjit Mitra, and Kazunari Sugiyama. 2016. Multi-document abstractive summarization using ILP based multi-sentence compression. CoRR, abs/1609.07034.
- Bowman et al. (2016) Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10–21, Berlin, Germany. Association for Computational Linguistics.
- Erkan and Radev (2004) Günes Erkan and Dragomir R. Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. J. Artif. Int. Res., 22(1):457–479.
- Graff et al. (2003) David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2003. English gigaword. Linguistic Data Consortium, Philadelphia, 4:1.
- Haghighi and Vanderwende (2009) Aria Haghighi and Lucy Vanderwende. 2009. Exploring content models for multi-document summarization. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL ’09, pages 362–370, Stroudsburg, PA, USA. Association for Computational Linguistics.
- Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735–1780.
- Jean et al. (2015) Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1–10, Beijing, China. Association for Computational Linguistics.
- Kikuchi et al. (2016) Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hiroya Takamura, and Manabu Okumura. 2016. Controlling output length in neural encoder-decoders. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1328–1338, Austin, Texas. Association for Computational Linguistics.
- Kingma and Ba (2014) Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.
- Kingma and Welling (2013) Diederik P. Kingma and Max Welling. 2013. Auto-encoding variational bayes. CoRR, abs/1312.6114.
- Lin (2004) Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74–81, Barcelona, Spain. Association for Computational Linguistics.
- Over et al. (2007) Paul Over, Hoa Dang, and Donna Harman. 2007. Duc in context. Inf. Process. Manage., 43(6):1506–1520.
- Parveen and Strube (2015) Daraksha Parveen and Michael Strube. 2015. Integrating importance, non-redundancy and coherence in graph-based extractive summarization.
- Paulus et al. (2017) Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. CoRR, abs/1705.04304.
- Rush et al. (2015) Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389, Lisbon, Portugal. Association for Computational Linguistics.
- Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS’14, pages 3104–3112, Cambridge, MA, USA. MIT Press.
- Vani and Birodkar (2016) Ankit Vani and Vighnesh Birodkar. 2016. Challenges with variational autoencoders for text.
- Zhao et al. (2017) Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 654–664. Association for Computational Linguistics.