Related research:
- Modeling Product Search Relevance in e-Commerce
- Product Insights: Analyzing Product Intents in Web Search
- Neural IR Meets Graph Embedding: A Ranking Model for Product Search
- Distant Supervision for E-commerce Query Segmentation via Attention Network
- Towards a simplified ontology for better e-commerce search
- Semantic Product Search
- End-to-End Neural Ranking for eCommerce Product Search: an application of task models and textual embeddings
A Comparison of Supervised Learning to Match Methods for Product Search
The vocabulary gap is a core challenge in information retrieval (IR). In e-commerce applications such as product search, the vocabulary gap is reported to be a bigger challenge than in more traditional IR application areas such as news search or web search. As recent learning to match methods have made important advances in bridging the vocabulary gap in these traditional IR areas, we investigate their potential in the context of product search. In this paper we provide insights into using recent learning to match methods for product search. We compare both the effectiveness and the efficiency of these methods in a product search setting and analyze their performance on two product search datasets, with 50,000 queries each. One is an open dataset made available as part of a community benchmark activity at CIKM 2016. The other is a proprietary query log obtained from a European e-commerce platform. This comparison is conducted towards a better understanding of the trade-offs in choosing a preferred model for this task. We find that (1) models that have been specifically designed for short text matching, like MV-LSTM and DRMMTKS, are consistently among the top three methods in all experiments; however, taking efficiency and accuracy into account at the same time, ARC-I is the preferred model for real-world use cases; and (2) the performance of a state-of-the-art BERT-based model is mediocre, which we attribute to the fact that the text BERT is pre-trained on is very different from the text we have in product search. We also provide insights into factors that can influence model behavior for different types of queries, such as the length of the retrieved list and query complexity, and discuss the implications of our findings for e-commerce practitioners with respect to choosing a well-performing method.
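As a rough, self-contained illustration of the vocabulary gap that motivates these learning to match methods (this is not code from the paper), the Python sketch below scores a hypothetical query against two hypothetical product titles with (a) exact lexical overlap and (b) cosine similarity over made-up word embeddings; all vocabulary, vectors, and function names are invented for illustration.

# Toy illustration of the vocabulary gap in product search (hypothetical data).
# Lexical overlap scores 0 when the query and product title share no terms,
# while a (made-up) embedding similarity can still rank the relevant item higher.
import math

# Hypothetical 3-dimensional word embeddings; real models learn these from data.
EMB = {
    "sneakers": [0.9, 0.1, 0.0],
    "trainers": [0.85, 0.15, 0.05],   # near-synonym of "sneakers"
    "running":  [0.6, 0.4, 0.1],
    "shoes":    [0.8, 0.2, 0.1],
    "blender":  [0.0, 0.1, 0.95],
    "kitchen":  [0.05, 0.2, 0.9],
}

def lexical_overlap(query, title):
    # Fraction of query terms that literally appear in the product title.
    q, t = set(query.split()), set(title.split())
    return len(q & t) / len(q)

def avg_vector(terms):
    # Mean of the embeddings of the known terms.
    vecs = [EMB[t] for t in terms if t in EMB]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def embedding_similarity(query, title):
    # Cosine similarity between the mean query and mean title embeddings.
    return cosine(avg_vector(query.split()), avg_vector(title.split()))

query = "running sneakers"
for title in ["trainers shoes", "kitchen blender"]:
    print(title,
          "lexical:", round(lexical_overlap(query, title), 2),
          "embedding:", round(embedding_similarity(query, title), 2))

In the comparison reported above, models such as MV-LSTM, DRMMTKS, and ARC-I learn their term and interaction representations from training data rather than relying on a fixed lookup table like the one used here.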