Accelerating Text Mining Using Domain-Specific Stop Word Lists

11/18/2020
by Farah Alshanik, et al.

Text preprocessing is an essential step in text mining. Removing words that can negatively impact the quality of prediction algorithms or are not informative enough is a crucial storage-saving technique in text indexing and results in improved computational efficiency. Typically, a generic stop word list is applied to a dataset regardless of the domain. However, many common words differ from one domain to another yet carry no significance within a particular domain. Eliminating domain-specific common words in a corpus reduces the dimensionality of the feature space and improves the performance of text mining tasks. In this paper, we present a novel mathematical approach for the automatic extraction of domain-specific words, called the hyperplane-based approach. This new approach relies on a low-dimensional representation of words in vector space and their distance from a hyperplane. The hyperplane-based approach can significantly reduce text dimensionality by eliminating irrelevant features. We compare the hyperplane-based approach with two other feature selection methods, namely χ² (chi-squared) and mutual information. An experimental study is performed on three different datasets and five classification algorithms, measuring both the dimensionality reduction and the increase in classification performance. Results indicate that the hyperplane-based approach can reduce the dimensionality of the corpus by 90%, and that the time needed to extract the domain-specific words is significantly lower than with mutual information.
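The abstract gives only the geometric intuition, but that intuition is straightforward to prototype. The sketch below is a hypothetical reading, not the authors' published implementation: word vectors are obtained from a truncated-SVD (LSA) factorization of a TF-IDF term-document matrix, a best-fit hyperplane through the word cloud is estimated with PCA (its normal is the least-variance direction), and the words closest to that hyperplane are flagged as domain-specific stop words. The function name, the embedding choice, and the quantile cutoff are all illustrative assumptions.

```python
# Hypothetical sketch of a hyperplane-based stop word extractor; NOT the
# authors' published implementation. Assumptions: word vectors come from a
# truncated-SVD (LSA) factorization of a TF-IDF term-document matrix, and
# "the hyperplane" is the best-fit hyperplane through the word-vector cloud,
# whose normal is the least-variance PCA direction.
import numpy as np
from sklearn.decomposition import PCA, TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

def extract_domain_stopwords(docs, n_dims=50, quantile=0.10):
    """Return the words whose vectors lie closest to the best-fit hyperplane.

    Words near the hyperplane are treated as domain-specific common words;
    `quantile` controls how many are flagged (an illustrative choice).
    """
    tfidf = TfidfVectorizer()
    X = tfidf.fit_transform(docs)                  # docs x terms
    svd = TruncatedSVD(n_components=n_dims, random_state=0).fit(X)
    word_vecs = svd.components_.T                  # terms x n_dims

    # Best-fit hyperplane through the centred word cloud: its unit normal
    # is the principal direction with the least variance.
    pca = PCA(n_components=n_dims).fit(word_vecs)
    normal = pca.components_[-1]
    centre = word_vecs.mean(axis=0)
    dists = np.abs((word_vecs - centre) @ normal)  # point-to-plane distance

    cutoff = np.quantile(dists, quantile)
    vocab = np.array(tfidf.get_feature_names_out())
    return vocab[dists <= cutoff]

# Toy usage: flag the "hyperplane-hugging" 30% of a tiny clinical corpus.
docs = [
    "the patient presented with fever",
    "fever and cough were noted",
    "the doctor noted the patient improved",
]
print(extract_domain_stopwords(docs, n_dims=2, quantile=0.3))
```

Under this reading, words lying near the best-fit hyperplane contribute little along the directions that distinguish documents in the embedding space, which is why they are candidates for a domain-specific stop word list.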
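For the baselines named above, χ² and mutual information feature selection are both available off the shelf in scikit-learn, so a comparison harness is short to set up. The corpus, labels, and k below are invented purely for illustration and are not the paper's experimental setup.

```python
# Hedged illustration of the two baselines named in the abstract, using
# scikit-learn's chi2 and mutual_info_classif scorers. The corpus, labels,
# and k are invented for the example.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif

docs = [
    "fever cough patient doctor",
    "patient doctor fever improved",
    "rocket launch orbit satellite",
    "orbit satellite launch engine",
]
labels = [0, 0, 1, 1]  # e.g. medicine vs. space

X = CountVectorizer().fit_transform(docs)  # raw counts; chi2 needs non-negative input

# Keep the 4 highest-scoring terms under each criterion.
X_chi2 = SelectKBest(chi2, k=4).fit_transform(X, labels)
X_mi = SelectKBest(mutual_info_classif, k=4).fit_transform(X, labels)
print(X_chi2.shape, X_mi.shape)  # both reduced to 4 columns
```

A comparison in the spirit of the paper's experiments would then train each classifier on the χ²-selected, MI-selected, and hyperplane-filtered corpora and record both accuracy and the time spent on feature selection.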

Related research

03/18/2023
Stop Words for Processing Software Engineering Documents: Do they Matter?
Stop words, which are considered non-predictive, are often eliminated in...

02/06/2020
Towards Semantic Noise Cleansing of Categorical Data based on Semantic Infusion
Semantic Noise affects text analytics activities for the domain-specific...

09/05/2019
Fusing Vector Space Models for Domain-Specific Applications
We address the problem of tuning word embeddings for specific use cases ...

08/10/2015
Measuring Word Significance using Distributed Representations of Words
Distributed representations of words as real-valued vectors in a relativ...

10/25/2022
New wrapper method based on normalized mutual information for dimension reduction and classification of hyperspectral images
Feature selection is one of the most important problems in hyperspectral...

07/02/2016
Text comparison using word vector representations and dimensionality reduction
This paper describes a technique to compare large text sources using wor...

02/06/2023
Data Selection for Language Models via Importance Resampling
Selecting a suitable training dataset is crucial for both general-domain...
