Automatic Generation of Chinese Short Product Titles for Mobile Display

03/30/2018 ∙ by Yu Gong, et al. ∙ Shanghai Jiao Tong University

This paper studies the problem of automatically extracting a short title from a manually written longer description of E-commerce products, for display on mobile devices. It is a new extractive summarization problem on short text inputs, for which we propose a feature-enriched network model combining three different categories of features in parallel. Experimental results show that our framework significantly outperforms several baselines by a substantial gain of 4.5%. We also construct an extractive summarization dataset of E-commerce short texts and will release it to the research community.


1 Introduction

Mobile Internet is fast becoming the primary venue for E-commerce. People have grown used to browsing through collections of products and making transactions on the relatively small screens of mobile phones. All major E-commerce giants, such as Amazon, eBay and Taobao, offer mobile apps that are poised to supersede their conventional websites.

Figure 1: A cut-off long title on an E-commerce mobile app, vs. a corresponding short title.

When a product is featured on an E-commerce website or mobile app, it is often associated with a textual title describing the key characteristics of the product. These titles, written by merchants, often contain exhaustive details so as to maximize the chances of being retrieved by user search queries. As a result, such titles are often verbose, over-informative, and hardly readable. While this is acceptable for display in a desktop web browser, it becomes a problem when such long titles are displayed in mobile apps. Take Figure 1 as an example. The title for a red sweater on an E-commerce mobile app is “ONE MORE文墨2017夏装新款印花连帽上衣长袖短款喇叭袖百搭卫衣女” (ONE MORE Wenmo 2017 summer new-style printed hooded top, long-sleeve, short-cut, flare-sleeve, versatile sweater, women's). Due to the limited display space on mobile phones, original long titles (usually more than 20 characters) are cut off, leaving only the first several characters “ONEMORE文墨2017夏装… (ONE MORE 2017 summer woman…)” on the screen, which is completely incomprehensible unless the user clicks on the product and loads the detailed product page.

Thus, in order to properly display product listings on a mobile screen, one has to significantly shorten the long titles (e.g., to under 10 characters) while keeping the most important information. This way, users only need to glance through the search result page to quickly decide whether they want to click on a particular product. Figure 1 also shows, for comparison, an alternative display with a shortened title for the same product. The short title in the left snapshot is “印花连帽短款喇叭袖卫衣”, which means “printed hoody short flare-sleeve sweater”.

In this paper, we attempt to extract short titles from their longer, more verbose counterparts for E-commerce products. To the best of our knowledge, this is the first work to attack the E-commerce product short title extraction problem.

This problem is related to text summarization, which generates a summary by either extracting or abstracting words or sentences from the input text. Existing summarization methods have primarily been applied to news or other long documents, which may contain irrelevant information. Thus, the goal of traditional summarization is to identify the most essential information in the input and condense it into something as fluent and readable as possible.

We attack our problem with an extractive summarization approach rather than an abstractive one, for the following reasons. First, our input title is relatively short and contains little noise (27 characters on average; see Table 1). Some words in the long title may not be important, but they are all relevant to the product. Thus, it is sufficient to decide whether each word should or should not stay in the summary. Second, the number of words in the output is strictly constrained in our problem due to the size of the display, and generative (abstractive) approaches do not perform as well under such a constraint. Finally, for E-commerce, it is better for the words in the summary to come from the original title: using different words may alter the original intention of the merchant.

State-of-the-art neural summarization models [Cheng and Lapata2016, Narayan et al.2017] are generally based on attentional RNN frameworks and have been applied to news or wiki-like articles. However, in E-commerce, customers are not particularly sensitive to the order of the words in a product title. Besides using a deep RNN with an attention mechanism to encode the word sequence, we believe other word-level semantic features, such as NER tags and TF-IDF scores, will be just as useful and should be given more weight in the model. In this paper, we propose a feature-enriched neural network model, which is not only deep but also wide, aiming to effectively shorten original long titles.

The contributions of this paper are summarized below:

  • We collect and will open source a product title summary dataset (Section 2).

  • We present a novel feature-enriched network model, combining three different types of word-level features (Section 4); the results show the model outperforms several strong baseline methods, with a ROUGE-1 F1 score of 0.725 (Section 5.4).

  • By deploying the framework on an E-commerce mobile app, we witnessed improved online sales and better turnover conversion rate in the popular 11/11 shopping season (Section 5.5).

2 Data Collection

Figure 2: Procedure of data collection in Youhaohuo.

Publicly available large-scale summarization datasets are rare. Existing document summarization datasets include DUC (http://duc.nist.gov/data.html), TAC (http://www.nist.gov/tac/2015/KBP/) and TREC (http://trec.nist.gov/) for English, and LCSTS (http://icrc.hitsz.edu.cn/Article/show/139.html) for Chinese. In this work, we create a dataset on short title extraction for E-commerce products. The dataset comes from a module in Taobao named “有好货” (Youhaohuo, “there are good goods”; https://h5.m.taobao.com/lanlan/index.html). Youhaohuo is a collection of high-quality products on Taobao. Clicking a product in Youhaohuo redirects to the detailed product page (including the product title). What is different from ordinary Taobao products is that online merchants are required to submit a short title for each Youhaohuo product. This short title, written by humans, is readable and describes the key properties of the product. Furthermore, most of these short titles are directly extracted from the original product titles. Thus, we believe Youhaohuo is a good data source for extractive summarization of product descriptions.

Figure 2 shows how we collected the data. On the left is a web page in Youhaohuo displaying several products, each with an image and a short title below it. Clicking on the bottom-right dress leads to the detailed page on the right. The title next to the picture, in the red box, is the manually written short title, which says “MIUCO针织马甲假两件收腰连衣裙” (MIUCO tight dress with knit vest). This short title is extracted from the long title below, in the blue box. Notice that all the characters in the short title are directly extracted from the long title (red boxes inside the blue box). In addition to the characters in the short title, the long title also contains extra information such as “女装2017冬新” (women's wear, brand new for winter 2017). In this work, we segment the original long titles and short titles into Chinese words with jieba (https://pypi.python.org/pypi/jieba/).

The dataset consists of 6,481,623 pairs of original and short product titles, making it the largest short text summarization dataset to date. We call it the Large Extractive Summary Dataset for E-Commerce (LESD4EC); its statistics are shown in Table 1. We believe this dataset will contribute to future research on short text summarization. (The dataset will be published after the paper is accepted; note that all the original data can be crawled online.)

No. of summaries                 6,481,623
Avg. no. of words per text              12
Avg. no. of chars per text              27
Avg. no. of words per summary            5
Avg. no. of chars per summary           11
Table 1: Statistics of the LESD4EC dataset (per-title figures are averages).

3 Problem Definition

In this section we formally define the problem of short title extraction. A char is a single Chinese or English character. A segmented word (or term) is a sequence of one or more chars, such as “Nike” or “牛仔裤” (jeans). A product title, denoted as $T$, is a sequence of words $T = (w_1, w_2, \ldots, w_n)$. Let $Y = (y_1, y_2, \ldots, y_n)$ be a sequence of labels over $T$, where $y_i \in \{0, 1\}$. The corresponding short title is a subsequence of $T$, denoted as $S = (w_{i_1}, w_{i_2}, \ldots, w_{i_m})$, where $1 \le i_1 < i_2 < \cdots < i_m \le n$ and $m \le n$.

We regard the short title extraction task as a sequence classification problem. Each word is visited sequentially in the original product title order and a binary decision is made. We do this by scoring each word $w_i$ within $T$ and predicting a label $y_i \in \{0, 1\}$, indicating whether the word should or should not be included in the short title $S$. As we apply supervised training, the objective is to maximize the likelihood of all word labels $Y$, given the input product title $T$ and model parameters $\theta$:

$$\log p(Y \mid T; \theta) = \sum_{i=1}^{n} \log p(y_i \mid T; \theta) \quad (1)$$
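Concretely, maximizing this likelihood amounts to minimizing a per-word binary cross-entropy. Below is a minimal sketch of the objective, assuming PyTorch; the tensor values are illustrative only.

```python
import torch
import torch.nn.functional as F

# Illustrative values: scores[i] is the model's p(y_i = 1 | T) for each word
# in a title; labels[i] is the ground-truth 0/1 inclusion label.
scores = torch.tensor([0.9, 0.2, 0.7, 0.1])
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])

# Maximizing the log-likelihood of Eq. (1) is equivalent to minimizing the
# binary cross-entropy summed over the words of the title.
loss = F.binary_cross_entropy(scores, labels, reduction="sum")
```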

4 Feature-Enriched Neural Extractive Model

In this section, we describe our extractive model for product short title extraction. The overall architecture of our neural network based extractive model is shown in Figure 3. Basically, we use a Recurrent Neural Network (RNN) as the main building block of the sequential classifier. However, unlike traditional RNN-based sequence labeling models used in NER or POS tagging, where all the word-level features are fed into the RNN cell, we instead divide the features into three parts, namely Content, Attention and Semantic. Finally, we combine all three features in an ensemble.

Figure 3: Architecture of Feature-Enriched Neural Extractive Model.

4.1 Content Feature

To encode the product title, we first look up an embedding matrix $E \in \mathbb{R}^{d \times |V|}$ to get the word embeddings $(e_1, e_2, \ldots, e_n)$. Here, $d$ denotes the dimension of the embeddings and $|V|$ denotes the vocabulary size of natural language words. The embeddings are then fed into a bidirectional LSTM network. From this we get two hidden state sequences, $(\overrightarrow{h}_1, \ldots, \overrightarrow{h}_n)$ from the forward network and $(\overleftarrow{h}_1, \ldots, \overleftarrow{h}_n)$ from the backward network. We concatenate the forward hidden state of each word with the corresponding backward hidden state, resulting in a representation $h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$. At this point, we obtain the representation of the product title $(h_1, h_2, \ldots, h_n)$.

The content feature of the current word $w_i$ is then calculated as:

$$C_i = W_c h_i + b_c \quad (2)$$

where $W_c$ and $b_c$ are model parameters.
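A minimal PyTorch sketch of this content component follows; the vocabulary size and layer dimensions are assumptions for illustration, since the paper's exact sizes are not given in this extract.

```python
import torch
import torch.nn as nn

# Assumed sizes for illustration; |V| and d are not specified in this extract.
vocab_size, emb_dim, hidden = 50000, 128, 128

embedding = nn.Embedding(vocab_size, emb_dim)                 # the matrix E
bilstm = nn.LSTM(emb_dim, hidden, num_layers=2,
                 bidirectional=True, batch_first=True)
content_proj = nn.Linear(2 * hidden, 1)                       # W_c, b_c of Eq. (2)

word_ids = torch.randint(0, vocab_size, (1, 12))              # one 12-word title
h, _ = bilstm(embedding(word_ids))                            # h_i = [fwd; bwd], shape (1, 12, 2*hidden)
content_feature = content_proj(h).squeeze(-1)                 # C_i, one score per word
```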

4.2 Attention Feature

In order to measure the importance of each word relative to the whole product title, we borrow the idea of the attention mechanism [Bahdanau, Cho, and Bengio2014, Luong, Pham, and Manning2015] to calculate a relevance score between the hidden vector of the current word and a representation of the entire title sequence.

The representation of the entire product title is modeled as a non-linear transformation of the average pooling of the concatenated hidden states of the BiLSTM:

$$t = \tanh\left(W_t \cdot \frac{1}{n} \sum_{i=1}^{n} h_i + b_t\right) \quad (3)$$

The attention feature of the current word is then calculated by a bilinear combination function:

$$A_i = h_i^{\top} W_a t \quad (4)$$

where $W_a$ is a parameter matrix.
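Under the same assumptions as the content sketch above, the attention component can be written as follows; the average pooling and bilinear product follow Eqs. (3) and (4), with sizes chosen for illustration.

```python
import torch
import torch.nn as nn

dim = 256                                        # size of the concatenated state h_i
title_proj = nn.Linear(dim, dim)                 # W_t, b_t of Eq. (3)
W_a = nn.Parameter(torch.randn(dim, dim))        # bilinear matrix of Eq. (4)

h = torch.randn(1, 12, dim)                      # BiLSTM states from the content part

t = torch.tanh(title_proj(h.mean(dim=1)))        # Eq. (3): pooled title vector, (1, dim)
attention_feature = torch.einsum("bnd,dk,bk->bn", h, W_a, t)   # Eq. (4): A_i per word
```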

4.3 Semantic Feature

Apart from the two hidden features calculated by the RNN encoder, we design a third kind of feature, based on TF-IDF and NER tags, to capture the deeper semantics of each word in a product title.

Tf-Idf

Tf-idf, short for term frequency-inverse document frequency, is a numerical statistic intended to reflect how important a word is to a document in a corpus, or a sentence in a document.

A simple choice for calculating the term frequency of the current word $w_i$ is the number of its occurrences (count) in the title:

$$\mathrm{tf}_i = \mathrm{count}(w_i, T) \quad (5)$$

The inverse document frequency is calculated as:

$$\mathrm{idf}_i = \log \frac{N}{|\{T' : w_i \in T'\}|} \quad (6)$$

where $N$ is the number of product titles in the corpus and $|\{T' : w_i \in T'\}|$ is the number of titles containing the word $w_i$.

Combining the above two, the tf-idf score of word $w_i$ in a product title $T$, denoted as $\mathrm{tfidf}_i$, is calculated as the product of $\mathrm{tf}_i$ and $\mathrm{idf}_i$.

We build a feature vector of three values (the tf score, the idf score and the tf-idf score) and then calculate a third feature:

$$s_i = [\mathrm{tf}_i,\ \mathrm{idf}_i,\ \mathrm{tfidf}_i] \quad (7)$$

$$S_i = W_s s_i + b_s \quad (8)$$
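The following sketch computes the three-value feature vector of Eq. (7) directly from a corpus of segmented titles; the linear map of Eq. (8) is omitted, and the function name is ours.

```python
import math
from collections import Counter

def tfidf_features(title, corpus_titles):
    """Per-word [tf, idf, tf-idf] vectors (Eqs. 5-7) for one segmented title."""
    n_docs = len(corpus_titles)
    counts = Counter(title)                              # Eq. (5): occurrences in this title
    feats = []
    for w in title:
        df = sum(1 for t in corpus_titles if w in t)     # titles containing w
        idf = math.log(n_docs / max(df, 1))              # Eq. (6); max() guards unseen words
        feats.append([counts[w], idf, counts[w] * idf])
    return feats

# e.g. tfidf_features(["印花", "连帽", "卫衣"], corpus_of_segmented_titles)
```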

Ner

We use a specialized NER tool for E-commerce to label entities in a product title. The tool covers a fixed set of entity types that are of common interest in the E-commerce scenario, such as “颜色” (color), “风格” (style) and “尺寸规格” (size). For example, in the segmented product title “包邮 Nike 品牌 的 红色 运动裤” (Nike red sweatpants with free shipping), “包邮” (free shipping) is labeled as Marketing_Service, “Nike” is labeled as Brand, “红色” (red) is labeled as Color, and “运动裤” (sweatpants) is labeled as Category. We use a one-hot representation $n_i$ to encode the NER tag of each word, and integrate it into the model through a fourth feature:

$$N_i = W_n n_i + b_n \quad (9)$$
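A toy version of the one-hot NER encoding is shown below; the tag inventory is a hypothetical subset, since the full set of entity types of the proprietary NER tool is not listed in this extract.

```python
# Hypothetical subset of the E-commerce NER tag inventory.
NER_TAGS = ["Brand", "Color", "Category", "Style", "Size", "Marketing_Service", "Other"]

def ner_one_hot(tag):
    """One-hot vector n_i for a word's NER tag, the input to Eq. (9)."""
    vec = [0.0] * len(NER_TAGS)
    vec[NER_TAGS.index(tag)] = 1.0
    return vec

ner_one_hot("Brand")   # e.g. for "Nike" -> [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```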

4.4 Ensemble

We combine all the features above into one final score for word $w_i$:

$$p(y_i = 1 \mid T) = \sigma(C_i + A_i + S_i + N_i) \quad (10)$$

where $\sigma$ is the sigmoid (logistic) function, which constrains the score to lie between 0 and 1. Based on this score, we set a threshold $\delta$ to decide whether to keep word $w_i$ in the short title.
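Putting the pieces together, offline inference can be sketched as below; the simple additive combination inside the sigmoid is our reading of Eq. (10), and the threshold value is a placeholder.

```python
import torch

def final_scores(content, attention, semantic, ner):
    # Eq. (10), read as an additive combination of the four per-word scores.
    return torch.sigmoid(content + attention + semantic + ner)

def extract_short_title(words, scores, threshold=0.5):
    # Offline inference: keep word w_i whenever its score clears the threshold.
    return [w for w, s in zip(words, scores.tolist()) if s > threshold]
```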

Our model is very much like the Wide & Deep Model architecture [Cheng et al.2016]: while the content and attention features are deep, relying on a deep RNN structure, the semantic features are relatively wide and linear.

5 Experiments

In this section, we first introduce the experimental setup and the previous state-of-the-art systems used as comparisons to our own model, Feature-Enriched-Net. We then present the implementation details and the evaluation results, followed by a discussion.

5.1 Training and Testing Data

We randomly select 500,000 product titles as our training data, and another 50,000 for testing. Each product title is annotated with a sequence of binary labels, i.e., each word is labeled with 1 (included in the short title) or 0 (not included in the short title). Readers may refer to Section 2 for the details of how we collect the product titles and their corresponding short titles.

5.2 Baseline Systems

Since there is no previous work that directly solves short title extraction for E-commerce products, we select our baselines from three categories. The first is traditional methods. We choose a keyword extraction framework known as TextRank [Mihalcea and Tarau2004]. It first infers an importance score for each word within the long title using an algorithm similar to PageRank, then decides whether each word should be kept in the short title according to the scores.

The second category is standard sequence labeling systems. We choose the system of [Huang et al.2015], in which a multi-layer BiLSTM is used. Compared to our system, it exploits neither the attention mechanism nor any side feature information. We substitute the Conditional Random Field (CRF) layer with logistic regression to make it compatible with our binary labeling problem. We call this system BiLSTM-Net.

The last category is attention-based frameworks, which use an encoder-decoder architecture with an attention mechanism. We choose the Pointer Network [Vinyals, Fortunato, and Jaitly2015] as a comparison and call it Pointer-Net. During decoding, it looks at the whole sentence, calculates the attention distribution, and then makes decisions based on the attention probabilities.

5.3 Implementation Details

We pre-train the word embeddings used in our model on the full product title data plus an extra corpus called “E-commerce Product Recommended Reason”, which is written by online merchants and is also extracted from Youhaohuo (Section 2). We use the Word2vec [Mikolov et al.2013a, Mikolov et al.2013b] CBOW model with negative sampling and hierarchical softmax. For Out-Of-Vocabulary (OOV) words, embeddings are initialized as zeros. All embeddings are updated during training.
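For reference, such a pre-training run can be reproduced with gensim along the following lines; every numeric hyperparameter below is a placeholder, as the paper's exact values are not preserved in this extract.

```python
from gensim.models import Word2Vec

# Placeholder corpus: segmented long titles plus "Recommended Reason" texts.
sentences = [["印花", "连帽", "卫衣"], ["针织", "马甲", "连衣裙"]]

# CBOW (sg=0) with hierarchical softmax and negative sampling, as described;
# vector_size, window, negative and epochs are placeholder values.
model = Word2Vec(sentences, sg=0, hs=1, negative=5,
                 vector_size=100, window=5, epochs=5, min_count=1)
```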

For the recurrent neural network component of our system, we use a two-layer LSTM network, and all product titles are padded to a fixed maximum sentence length. We train with a mini-batch cross-entropy loss, using the Adam optimizer and a fixed initial learning rate.

5.4 Offline Evaluation

To evaluate the quality of automatically extracted short titles, we use ROUGE [Lin and Hovy2003] to compare model-generated short titles with manually written ones. In this paper, we only report ROUGE-1, mainly because linguistic fluency and word order are not of concern in this task. Unlike previous works, in which ROUGE is a recall-oriented metric, we jointly consider precision, recall and F1 score: recall only measures the ratio of the number of extracted words included in the ground-truth short title over the total number of words in the ground-truth short title. However, due to the limited display space on mobile phones, the number of words (or characters) in the extracted short title itself should be constrained as well; thus, precision is also measured in our experiments. The F1 score is considered a comprehensive evaluation metric:

$$P = \frac{m}{|S|}, \quad R = \frac{m}{|S^*|}, \quad F_1 = \frac{2 P R}{P + R}$$

where $S$ is the short title generated by the model, $S^*$ is the manually written short title, and $m$ is the number of overlapping words appearing in both.
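A word-overlap implementation of these metrics is straightforward; the sketch below counts the multiset overlap between the two titles (the function name is ours).

```python
from collections import Counter

def rouge1_prf(model_words, human_words):
    """ROUGE-1 precision, recall and F1 between two segmented short titles."""
    overlap = sum((Counter(model_words) & Counter(human_words)).values())  # m
    p = overlap / len(model_words) if model_words else 0.0
    r = overlap / len(human_words) if human_words else 0.0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1
```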

Final results on the test set are shown in Table 2. Our method (Feature-Enriched-Net) outperforms the baselines on both precision and recall, and achieves the best F1 score, improving over the strongest baseline by a relative gain of 4.5%. Pointer-Net achieves a higher precision score than BiLSTM-Net owing to its ability to attend to the whole sentence and select the most important words. Our method combines long short-term memories, the attention mechanism and additional rich semantic features. That is to say, our model is able to extract the most essential words from the original long titles, making the short titles more accurate and comprehensive.

We also tune the threshold $\delta$ used in BiLSTM-Net and in our Feature-Enriched-Net. This threshold determines how large the predicted likelihood must be for a word to be included in the final short title. The results are reported in Figure 4, where we vary $\delta$ over several values. From the figures, we conclude that our model consistently performs better than the other.

Models                 Precision   Recall   F1
TextRank               0.430       0.219    0.290
BiLSTM-Net             0.637       0.751    0.689
Pointer-Net            0.648       0.746    0.694
Feature-Enriched-Net   0.675       0.783    0.725
Table 2: Final results on the test set. We report ROUGE-1 precision, recall and the corresponding F1. We use the tuned threshold (see Figure 4) for BiLSTM-Net and Feature-Enriched-Net. The best ROUGE score in each column is highlighted in boldface.
Figure 4: Offline results of BiLSTM-Net and Feature-Enriched-Net under different thresholds; online A/B testing of sales volume and turnover conversion rate.

5.5 Online A/B Testing

This subsection presents the results of online evaluation in the search result page scenario of an E-commerce mobile app, with a standard A/B testing configuration. Due to the limited display space on mobile phones, only a fixed number of chars (12 in this app) can be shown, and the excess is cut off. Therefore, unlike the previously mentioned inference approach (thresholding, Section 4.4), we cast inference as a classic 0-1 Knapsack Problem. Each word $w_i$ in the product title is an item with weight $c_i$ and value $p_i$, where $c_i$ is the char length of word $w_i$ and $p_i$ is the likelihood predicted for $w_i$ by our model. The maximum weight capacity of the knapsack (i.e., the char length limit) is $L$. The target is then:

$$\max_{x_1, \ldots, x_n} \sum_{i=1}^{n} x_i p_i \quad \text{s.t.} \quad \sum_{i=1}^{n} x_i c_i \le L, \quad x_i \in \{0, 1\}$$

where $x_i = 1$ means $w_i$ should be kept in the short title. As in the standard solution to the 0-1 Knapsack Problem, we use a Dynamic Programming (DP) algorithm, sketched below.
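A minimal sketch of this DP inference follows, assuming the per-word likelihoods from Section 4.4; it tracks the chosen indices so that the kept words preserve their original order.

```python
def knapsack_short_title(words, scores, limit=12):
    """0-1 knapsack over title words: maximize total predicted likelihood
    under a total character-length budget (12 chars in the deployed app)."""
    lengths = [len(w) for w in words]
    # dp[j] = (best total value, chosen word indices) within j characters.
    dp = [(0.0, [])] * (limit + 1)
    for i in range(len(words)):
        for j in range(limit, lengths[i] - 1, -1):    # reverse scan: each word used once
            value = dp[j - lengths[i]][0] + scores[i]
            if value > dp[j][0]:
                dp[j] = (value, dp[j - lengths[i]][1] + [i])
    keep = set(dp[limit][1])
    return [w for i, w in enumerate(words) if i in keep]   # original word order
```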

In our online A/B testing evaluation, 3% of the users were randomly selected as the testing group (about 3.4 million user views (UV)), for whom we substituted the original cut-off long titles displayed to users with the short titles extracted by our model with DP inference. We claim that, when shown short titles containing the most important keywords, users get a much better idea of what a product is about on the search result page and thus find the product they want more easily. During the popular Double 11 shopping season, we ran the A/B test for 5 days (from 2017-11-02 to 2017-11-06) and achieved average improvements of 2.31% in sales volume and 1.22% in turnover conversion rate (see Figure 4 for each day). This clearly shows that better short product titles are more user-friendly and hence improve sales substantially.

ORIGINAL xunruo 熏 若 双生 设计师 品牌 泡泡 系列 一 字领 掉 袖 连衣裙 预订 款
(Bookable XunRuo twin designer brand bubble series dress with boat neckline and off sleeves.)
HUMAN 一字领 掉 袖 连衣裙
(Dress with boat neckline and off sleeves.)
Feature-Enriched-Net 泡泡 系列 一字领 掉 袖 连衣裙
(Bubble series dress with boat neckline and off sleeves.)
BiLSTM-Net 熏 若 品牌 掉 袖 连衣裙 预订
(Bookable XunRuo brand dress with off sleeves.)
Table 3: A real experimental case with a 12-char length limit. Pointer-Net is not included since its encoder-decoder architecture cannot directly adapt to the character length limit.

5.6 Discussions

In Table 3, we show a real case of an original long title, along with the short titles annotated by a human, predicted by BiLSTM-Net and predicted by Feature-Enriched-Net, respectively. From the human-annotated short title, we find that a proper short title should contain the most important elements of the product, such as the category (“dress”) and descriptions of properties (“boat neckline and off sleeves”), while other elements such as brand terms (“XunRuo twin designer brand”) or service terms (“open for reservation”) should not be kept. Our Feature-Enriched-Net is able to generate a satisfying short title, while the baseline model tends to miss some essential information.

However, there is still room for improvement. Terms with similar meanings may co-occur in the short title generated by our model when they all happen to be important terms, such as category words. For example, “皮衣” and “皮夹克” both mean “leather jacket” in a long title, and the model tends to keep both, although one of them is enough and the saved space could be used to display other useful information to customers. We will explore intra-attention [Paulus, Xiong, and Socher2017] as an extra feature in future work.

6 Related Work

Extractive summarization methods [Erkan and Radev2004, McDonald2007, Wong, Wu, and Li2008] produce summaries by concatenating sentences or words found directly in the original texts. Several methods have been used to select the summary-worthy sentences, including binary classifiers [Kupiec, Pedersen, and Chen1995], Markov models [Conroy and O'leary2001], graph-based models [Erkan and Radev2004, Mihalcea2005] and integer linear programming (ILP) [Woodsend and Lapata2010]. Compared to traditional methods, which rely heavily on human-engineered features, neural network based approaches [Kågebäck et al.2014, Cheng and Lapata2016, Nallapati, Zhai, and Zhou2017, Narayan et al.2017] have rapidly gained popularity. The general idea of these methods is to treat extractive summarization as a sequence classification problem and adopt RNN-like networks. Moreover, attention-based frameworks can perform better by attending to the whole document when extracting a word (or sentence) [Cheng and Lapata2016]. Our work is based on an extractive framework as well. Besides a deep attentional RNN network, we also employ explicit semantic features such as TF-IDF scores and NER tags, making our model more informative.

On the other hand, abstractive summarization methods [Chen et al.2016, Nallapati et al.2016, See, Liu, and Manning2017], which have the ability to generate text beyond the original input, can in most cases produce more coherent and concise summaries. These approaches are mostly centered on the attention mechanism and are augmented with recurrent decoders [Chopra, Auli, and Rush2016], Abstract Meaning Representations [Takase et al.], hierarchical networks [Nallapati et al.2016] and pointer networks [See, Liu, and Manning2017]. However, it is not necessary to use abstractive methods for the short title extraction problem in this paper, as the input is a product title, which already contains the required informative words.

Our work sits comfortably in the area of short text summarization. This line of research is essentially sentence compression on short text inputs such as tweets, microblogs or single sentences. Recent advances typically contribute improvements to seq-to-seq learning, or attentional RNN encoder-decoder structures [Chopra, Auli, and Rush2016, Nallapati et al.2016]. While these methods are mostly abstractive, we use an extractive framework combined with a deep attentional RNN network and explicit semantic features, owing to the different nature of our summarization scenario.

The most closely related work is [Wang et al.2018], which also compresses product titles in E-commerce. They use user search logs as external knowledge to guide the model and regard the short title as a kind of query, in order to improve online business value. In contrast, user search logs are not required by our model, and our goal is to make the short title read as if it were written by a human.

7 Conclusion

To the best of our knowledge, this is the first piece of work that focuses on extractive summarization for E-commerce product titles. We propose a deep and wide model, combining an attentional RNN framework with rich semantic features such as TF-IDF scores and NER tags. Our model outperforms several popular summarization models and achieves a ROUGE-1 F1 score of 0.725. Furthermore, the results of online A/B testing show substantial benefits of our model in a real online shopping scenario. Possible future work includes handling similar terms that appear in the short titles generated by our model.

References