Spam filtering on forums: A synthetic oversampling based approach for imbalanced data classification

09/10/2019
by Pratik Ratadiya, et al.

Forums play an important role in providing a platform for community interaction. The introduction of irrelevant content or spam by individuals for commercial and social gains tends to degrade the professional experience presented to forum users, so automated moderation of the relevancy of posted content is desired. Machine learning is widely used for text classification and finds applications in spam email detection, fraudulent transaction detection, and similar tasks. A balanced class distribution in the training data is essential for classification algorithms to learn efficiently and accurately. In forums, however, spam content is sparse compared to relevant content, giving rise to a bias towards the latter during training; a model trained on such biased data will fail to correctly classify spam samples. An approach based on the Synthetic Minority Over-sampling Technique (SMOTE) is presented in this paper to tackle imbalanced training data. It involves synthetically creating new minority class samples from the existing ones until the classes are balanced. The augmented data is then passed through various classifiers, whose performance is recorded. The results were analyzed on data from the Spoken Tutorial (IIT Bombay) forums using standard performance metrics and revealed that models trained after synthetic minority oversampling outperform those trained on imbalanced data by substantial margins. An empirical comparison of the results obtained with and without SMOTE for various supervised classification algorithms is presented in this paper. Synthetic oversampling proves to be a critical technique for achieving a uniform class distribution, which in turn yields commendable results in text classification. The presented approach can be further extended to content categorization on educational websites, thus helping to improve the overall digital learning experience.
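To make the core idea concrete, below is a minimal sketch (not the authors' exact pipeline) of SMOTE-based rebalancing for forum-post spam detection, written with the scikit-learn and imbalanced-learn libraries. The toy corpus, labels, vectorizer settings, and choice of classifier are illustrative assumptions, not details taken from the paper.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

# Hypothetical toy corpus: 90 relevant posts (label 0) and 10 spam posts (label 1),
# mimicking the class imbalance seen on real forums.
posts = ["How do I run the Scilab tutorial on Ubuntu?"] * 90 + \
        ["Buy cheap watches now, click this link!!!"] * 10
labels = [0] * 90 + [1] * 10

# SMOTE operates in feature space, so convert the raw text to TF-IDF vectors first.
X = TfidfVectorizer().fit_transform(posts)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, stratify=labels, random_state=42)

# Synthesize new minority-class (spam) samples by interpolating between existing
# spam vectors and their nearest spam neighbours until both classes are equal in size.
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)

# Train any supervised classifier on the balanced data and evaluate it on the
# untouched, still-imbalanced test split.
clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
print(classification_report(y_test, clf.predict(X_test)))

Note that only the training split is oversampled; the test split keeps its natural imbalance so that the reported metrics reflect real-world conditions.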

Related research

WOTBoost: Weighted Oversampling Technique in Boosting for imbalanced learning (10/17/2019)
Machine learning classifiers often stumble over imbalanced datasets wher...

A Self-Adaptive Synthetic Over-Sampling Technique for Imbalanced Classification (11/25/2019)
Traditionally, in supervised machine learning, (a significant) part of t...

Empirical Similarity for Absent Data Generation in Imbalanced Classification (08/05/2015)
When the training data in a two-class classification problem is overwhel...

Gamma distribution-based sampling for imbalanced data (09/22/2020)
Imbalanced class distribution is a common problem in a number of fields ...

Text classification in shipping industry using unsupervised models and Transformer based supervised models (12/21/2022)
Obtaining labelled data in a particular context could be expensive and t...

Minimizing the Societal Cost of Credit Card Fraud with Limited and Imbalanced Data (09/03/2019)
Machine learning has automated much of financial fraud detection, notify...

Beyond Toxic: Toxicity Detection Datasets are Not Enough for Brand Safety (03/27/2023)
The rapid growth in user generated content on social media has resulted ...
