Accelerating Text Mining Using Domain-Specific Stop Word Lists

11/18/2020
by Farah Alshanik, et al.

Text preprocessing is an essential step in text mining. Removing words that can negatively impact the quality of prediction algorithms or are not informative enough is a crucial storage-saving technique in text indexing and results in improved computational efficiency. Typically, a generic stop word list is applied to a dataset regardless of the domain. However, many words that are common in one domain carry no significance within another. Eliminating domain-specific common words from a corpus reduces the dimensionality of the feature space and improves the performance of text mining tasks. In this paper, we present a novel mathematical approach for the automatic extraction of domain-specific words, called the hyperplane-based approach. This new approach depends on a low-dimensional representation of each word in vector space and its distance from a hyperplane. The hyperplane-based approach can significantly reduce text dimensionality by eliminating irrelevant features. We compare the hyperplane-based approach with other feature selection methods, namely χ² and mutual information. An experimental study is performed on three different datasets and five classification algorithms, measuring the dimensionality reduction and the increase in classification performance. Results indicate that the hyperplane-based approach can reduce the dimensionality of the corpus by 90%, and that the time needed to identify the domain-specific words is significantly lower than that of mutual information.
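The abstract does not specify how the hyperplane is constructed, so the following is only a minimal sketch of the general idea: fit a hyperplane to the cloud of word embeddings (here, the hyperplane through the centroid whose normal is the first principal direction — an assumption, not necessarily the paper's construction) and flag the words whose vectors lie closest to it as candidate domain-specific stop words. The function name `hyperplane_stopwords` and the toy random embeddings are hypothetical, for illustration only.

```python
import numpy as np

def hyperplane_stopwords(embeddings, words, k):
    """Rank words by distance from a hyperplane fitted to the embedding
    cloud and return the k closest ones as candidate stop words."""
    X = embeddings - embeddings.mean(axis=0)          # center the cloud
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # principal directions
    normal = Vt[0]                                    # unit normal of the hyperplane
    dist = np.abs(X @ normal)                         # point-to-hyperplane distance
    return [words[i] for i in np.argsort(dist)[:k]]

# Toy usage: random 50-dimensional "embeddings" for a tiny vocabulary.
rng = np.random.default_rng(0)
vocab = ["patient", "dose", "study", "result", "method"]
vecs = rng.standard_normal((len(vocab), 50))
print(hyperplane_stopwords(vecs, vocab, k=2))
```

In a real pipeline the embeddings would come from a model trained on the target-domain corpus, and the cutoff `k` (or a distance threshold) would be tuned against downstream classification performance, as the paper's experiments do with χ² and mutual information as baselines.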
