Text Classification Algorithms: A Survey

04/17/2019 · by Kamran Kowsari, et al.

In recent years, there has been an exponential growth in the number of complex documents and texts that require a deeper understanding of machine learning methods to be able to accurately classify texts in many applications. Many machine learning approaches have achieved remarkable results in natural language processing. The success of these learning algorithms relies on their capacity to understand complex models and non-linear relationships within data. However, finding suitable structures, architectures, and techniques for text classification is a challenge for researchers. In this paper, a brief overview of text classification algorithms is discussed. This overview covers different text feature extraction methods, dimensionality reduction methods, existing algorithms and techniques, and evaluation methods. Finally, the limitations of each technique and their applications to real-world problems are discussed.


1 Introduction

Text classification problems have been widely studied and addressed in many real applications Jiang et al. (2018); Kowsari et al. (2017); McCallum et al. (1998); Kowsari et al. (2018); Heidarysafa et al. (2018); Lai et al. (2015); Aggarwal and Zhai (2012a, b) over the last few decades. Especially with recent breakthroughs in Natural Language Processing (NLP) and text mining, many researchers are now interested in developing applications that leverage text classification methods. Most text classification and document categorization systems can be deconstructed into the following four phases: feature extraction, dimensionality reduction, classifier selection, and evaluation. In this paper, we discuss the structure and technical implementations of text classification systems in terms of the pipeline illustrated in Figure 1 (The source code and the results are shared as free tools at https://github.com/kk7nc/Text_Classification).

The initial pipeline input consists of some raw text data set. In general, text data sets contain sequences of text in documents as $D = \{X_1, X_2, \dots, X_N\}$, where $X_i$ refers to a data point (i.e., document, text segment) with $s$ sentences, such that each sentence includes $w_s$ words with $l_w$ letters. Each point is labeled with a class value from a set of $k$ different discrete value indices Aggarwal and Zhai (2012a).

Then, we should create a structured set for our training purposes; we call this step feature extraction. The dimensionality reduction step is an optional part of the pipeline which could be part of the classification system (e.g., if we use Term Frequency-Inverse Document Frequency (TF-IDF) as our feature extraction method and the training set contains many unique words, the computation becomes very expensive, so we can reduce this cost by projecting the feature space into a lower-dimensional space). The most significant step in document categorization is choosing the best classification algorithm. The other part of the pipeline is the evaluation step, which is divided into two parts (predicting the test set and evaluating the model). In general, the text classification system contains four different levels of scope that can be applied:

  1. Document level: In the document level, the algorithm obtains the relevant categories of a full document.

  2. Paragraph level: In the paragraph level, the algorithm obtains the relevant categories of a single paragraph (a portion of a document).

  3. Sentence level: In the sentence level, the algorithm obtains the relevant categories of a single sentence (a portion of a paragraph).

  4. Sub-sentence level: In the sub-sentence level, the algorithm obtains the relevant categories of sub-expressions within a sentence (a portion of a sentence).

Figure 1: Overview of text classification pipeline.

(I) Feature Extraction: In general, texts and documents are unstructured data sets. However, these unstructured text sequences must be converted into a structured feature space when using mathematical modeling as part of a classifier. First, the data needs to be cleaned to omit unnecessary characters and words. After the data has been cleaned, formal feature extraction methods can be applied. The common techniques of feature extraction are Term Frequency-Inverse Document Frequency (TF-IDF), Term Frequency (TF) Salton and Buckley (1988), Word2Vec Goldberg and Levy (2014), and Global Vectors for Word Representation (GloVe) Pennington et al. (2014). In Section 2, we categorize these methods as either word embedding or weighted word techniques and discuss the technical implementation details.

(II) Dimensionality Reduction: As text or document data sets often contain many unique words, data pre-processing steps can be slowed by high time and memory complexity. A common solution to this problem is simply using inexpensive algorithms. However, in some data sets, these kinds of cheap algorithms do not perform as well as expected. In order to avoid the decrease in performance, many researchers prefer to use dimensionality reduction to reduce the time and memory complexity of their applications. Using dimensionality reduction for pre-processing could be more efficient than developing inexpensive classifiers.

In Section 3, we outline the most common techniques of dimensionality reduction, including Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and non-negative matrix factorization (NMF). We also discuss novel techniques for unsupervised feature extraction and dimensionality reduction, such as random projection, autoencoders, and t-distributed stochastic neighbor embedding (t-SNE).

(III) Classification Techniques: The most important step of the text classification pipeline is choosing the best classifier. Without a complete conceptual understanding of each algorithm, we cannot effectively determine the most efficient model for a text classification application. In Section 4, we discuss the most popular techniques of text classification. First, we cover traditional methods of text classification, such as Rocchio classification. Next, we talk about ensemble-based learning techniques such as boosting and bagging, which have been used mainly for query learning strategies and text analysis Mamitsuka et al. (1998); Kim et al. (2000); Schapire and Singer (2000). One of the simplest classification algorithms is logistic regression (LR), which has been addressed in most data mining domains Harrell (2001); Hosmer Jr et al. (2013); Dou et al. (2018); Chen et al. (2017). In the early history of information retrieval, the Naïve Bayes Classifier (NBC) was very popular as a feasible approach. We provide a brief overview of the Naïve Bayes Classifier, which is computationally inexpensive and also requires very little memory Larson (2010).

Non-parametric techniques have been studied and used for classification tasks, such as k-nearest neighbor (KNN) Li et al. (2001). Support Vector Machine (SVM) Manevitz and Yousef (2001); Han and Karypis (2000) is another popular technique which employs a discriminative classifier for document categorization. This technique can also be used in all domains of data mining, such as bioinformatics, image, video, human activity classification, safety and security, etc. This model is also used as a baseline by many researchers to compare against their own work to highlight novelty and contributions.

Tree-based classifiers such as decision tree and random forest have also been studied with respect to document categorization Xu et al. (2012). Each tree-based algorithm will be covered in a separate sub-section. In recent years, graphical models such as conditional random fields (CRFs) have also been considered for classification tasks Lafferty et al. (2001). However, these techniques are mostly used for document summarization Shen et al. (2007) and automatic keyword extraction Zhang (2008).

Lately, deep learning approaches have achieved surpassing results in comparison to previous machine learning algorithms on tasks such as image classification, natural language processing, face recognition, etc. The success of these deep learning algorithms relies on their capacity to model complex and non-linear relationships within data LeCun et al. (2015).

(IV) Evaluation: The final part of the text classification pipeline is evaluation. Understanding how a model performs is essential to the use and development of text classification methods. There are many methods available for evaluating supervised techniques. Accuracy calculation is the simplest method of evaluation but does not work for unbalanced data sets Huang and Ling (2005). In Section 5, we outline the following evaluation methods for text classification algorithms: F$_\beta$ score Lock (2002), Matthews Correlation Coefficient (MCC) Matthews (1975), receiver operating characteristics (ROC) Hanley and McNeil (1982), and area under the ROC curve (AUC) Pencina et al. (2008).

In Section 6, we discuss the limitations and drawbacks of the methods mentioned above. We also briefly compare the steps of the pipeline, including feature extraction, dimensionality reduction, classification techniques, and evaluation methods. The state-of-the-art techniques are compared in this section by many criteria, such as the architecture of their model, novelty of the work, feature extraction technique, corpus (the data set(s) used), validation measure, and the limitations of each work. Finding the best system for an application requires choosing a feature extraction method. This choice completely depends on the goal and data set of an application, as some feature extraction techniques are not efficient for a specific application. For example, since GloVe is trained on Wikipedia, it does not perform as well as TF-IDF when used for short text messages such as short message service (SMS) texts. Additionally, with limited data points, such a model cannot be trained as well as other techniques due to the small amount of data. The next step in this pipeline is the classification technique, for which we briefly discuss the limitations and drawbacks of each technique.

In Section 7, we describe the text and document classification applications. Text classification is a major challenge in many domains and fields for researchers. Information retrieval systems Jacobs (2014) and search engine Croft et al. (2010); Yammahi et al. (2014) applications commonly make use of text classification methods. Extending from these applications, text classification could also be used for applications such as information filtering (e.g., email and text message spam filtering) Chu et al. (2010). Next, we talk about adoption of document categorization in public health Gordon Jr (1983) and human behavior Nobles et al. (2018). Another area that has been helped by text classification is document organization and knowledge management. Finally, we will discuss recommender systems which are extensively used in marketing and advertising.

2 Text Preprocessing

Feature extraction and pre-processing are crucial steps for text classification applications. In this section, we introduce methods for cleaning text data sets, thus removing implicit noise and allowing for informative featurization. Furthermore, we discuss two common methods of text feature extraction: Weighted word and word embedding techniques.

2.1 Text Cleaning and Pre-processing

Most text and document data sets contain many unnecessary words such as stopwords, misspellings, slang, etc. In many algorithms, especially statistical and probabilistic learning algorithms, noise and unnecessary features can have adverse effects on system performance. In this section, we briefly explain some techniques and methods for text cleaning and pre-processing of text data sets.

2.1.1 Tokenization

Tokenization is a pre-processing method which breaks a stream of text into words, phrases, symbols, or other meaningful elements called tokens Gupta and Malhotra (2015); Verma et al. (2014). The main goal of this step is the investigation of the words in a sentence Verma et al. (2014). Both text classification and text mining require a parser which processes the tokenization of the documents; for example, consider the following sentence Aggarwal (2018):

After sleeping for four hours, he decided to sleep for another four.

In this case, the tokens are as follows:

{ “After” “sleeping” “for” “four” “hours” “he” “decided” “to” “sleep” “for” “another” “four” }.
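As a minimal sketch (not the parser used by the authors), the word-level tokenization above can be approximated with a short regular expression in Python; the pattern is illustrative and keeps only alphanumeric runs and apostrophes:

import re

text = "After sleeping for four hours, he decided to sleep for another four."

# A simple word tokenizer: keep alphanumeric runs (and apostrophes), drop punctuation.
tokens = re.findall(r"[A-Za-z0-9']+", text)
print(tokens)
# ['After', 'sleeping', 'for', 'four', 'hours', 'he', 'decided', 'to', 'sleep', 'for', 'another', 'four']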

2.1.2 Stop Words

Texts and documents include many words which do not carry important significance for classification algorithms, such as {“a”, “about”, “above”, “across”, “after”, “afterwards”, “again”, …}. The most common technique to deal with these words is to remove them from the texts and documents Saif et al. (2014).
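A hedged sketch of stop-word removal; the small stop-word set below is purely illustrative, whereas real systems typically rely on a curated list such as the ones shipped with NLTK or scikit-learn:

stop_words = {"a", "about", "above", "across", "after", "afterwards",
              "again", "for", "to", "he", "another"}   # illustrative subset only

tokens = ["After", "sleeping", "for", "four", "hours", "he",
          "decided", "to", "sleep", "for", "another", "four"]

# Keep only tokens that are not stop words (case-insensitive comparison).
filtered = [t for t in tokens if t.lower() not in stop_words]
print(filtered)   # ['sleeping', 'four', 'hours', 'decided', 'sleep', 'four']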

2.1.3 Capitalization

Text and document data points have a diversity of capitalization to form a sentence. Since documents consist of many sentences, diverse capitalization can be hugely problematic when classifying large documents. The most common approach for dealing with inconsistent capitalization is to reduce every letter to lower case. This technique projects all words in text and document into the same feature space, but it causes a significant problem for the interpretation of some words (e.g., “US” (United States of America) to “us” (pronoun)) Gupta et al. (2009). Slang and abbreviation converters can help account for these exceptions Dalal and Zaveri (2011).

2.1.4 Slang and Abbreviation

Slang and abbreviations are other forms of text anomalies that are handled in the pre-processing step. An abbreviation Whitney and Evans (2010) is a shortened form of a word or phrase which consists mostly of the first letters of the words, such as SVM, which stands for Support Vector Machine.

Slang is a subset of language used in informal talk or text that carries a meaning different from its literal one, such as “lost the plot”, which essentially means that someone has gone mad Helm (2003). A common method for dealing with these words is converting them into formal language Dhuliawala et al. (2016).

2.1.5 Noise Removal

Most text and document data sets contain many unnecessary characters such as punctuation and special characters. Critical punctuation and special characters are important for human understanding of documents, but they can be detrimental for classification algorithms Pahwa et al.

2.1.6 Spelling Correction

Spelling correction is an optional pre-processing step. Typos (short for typographical errors) are commonly present in texts and documents, especially in social media text data sets (e.g., Twitter). Many algorithms, techniques, and methods have addressed this problem in NLP Mawardi et al. (2018). Many techniques and methods are available for researchers including hashing-based and context-sensitive spelling correction techniques Dziadek et al. (2017), as well as spelling correction using Trie and Damerau–Levenshtein distance bigram Mawardi et al. (2018).

2.1.7 Stemming

In NLP, one word could appear in different forms (i.e., singular and plural noun form) while the semantic meaning of each form is the same Spirovski et al. (2018). One method for consolidating different forms of a word into the same feature space is stemming. Text stemming modifies words to obtain variant word forms using different linguistic processes such as affixation (addition of affixes) Singh and Gupta (2016); Sampson (2005). For example, the stem of the word “studying” is “study”.
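For example, the Porter stemmer in NLTK (one common implementation; assuming NLTK is installed) maps several surface forms of a word to a shared stem:

from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
for word in ["studying", "studies", "sleeping", "decided", "fireworks"]:
    # Note: a stem need not be a valid dictionary word; related forms simply
    # collapse onto the same string in the feature space.
    print(word, "->", stemmer.stem(word))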

2.1.8 Lemmatization

Lemmatization is an NLP process that replaces the suffix of a word with a different one or removes the suffix of a word completely to get the basic word form (lemma) Plisson et al. (2004); Korenius et al. (2004); Sampson (2005).

2.2 Syntactic Word Representation

Many researchers have worked on text feature extraction techniques to address the loss of syntactic and semantic relations between words. Many novel techniques have been proposed to solve this problem, but many of them still have limitations. In Caropreso and Matwin (2006), a model was introduced that analyzes the usefulness of including syntactic and semantic knowledge in the text representation for the selection of sentences from technical genomic texts. Another solution to the syntactic problem is using the n-gram technique for feature extraction.

2.2.1 N-Gram

The n-gram technique is a set of $n$ words which occur “in that order” in a text. This is not a representation of a text by itself, but it can be used as a feature to represent a text.

BOW is a representation of a text using its words ($n = 1$), which loses their order (syntax). This model is very easy to obtain, and the text can be represented through a vector of a generally manageable size. On the other hand, an n-gram ($n > 1$) is a feature of BOW for a representation of a text using $n$ consecutive words. It is very common to use $n = 2$ and $n = 3$. In this way, the extracted text features can capture more information in comparison to $n = 1$.

An Example of 2-Grams ($n = 2$):

After sleeping for four hours, he decided to sleep for another four.

In this case, the tokens are as follows:

{ “After sleeping”, “sleeping for”, “for four”, “four hours”, “hours he”, “he decided”, “decided to”, “to sleep”, “sleep for”, “for another”, “another four” }.

An Example of 3-Grams ($n = 3$):

After sleeping for four hours, he decided to sleep for another four.

In this case, the tokens are as follows:

{ “After sleeping for”, “sleeping for four”, “for four hours”, “four hours he”, “hours he decided”, “he decided to”, “decided to sleep”, “to sleep for”, “sleep for another”, “for another four” }.
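A minimal sketch of n-gram extraction over the token list above (illustrative only, not the authors' implementation):

def ngrams(tokens, n):
    """Return the list of n-grams (joined as strings) occurring in order in `tokens`."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = ["After", "sleeping", "for", "four", "hours", "he",
          "decided", "to", "sleep", "for", "another", "four"]

print(ngrams(tokens, 2))   # bigrams:  "After sleeping", "sleeping for", ...
print(ngrams(tokens, 3))   # trigrams: "After sleeping for", "sleeping for four", ...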

2.2.2 Syntactic N-Gram

In Sidorov et al. (2012), syntactic n-grams are discussed, which are defined by paths in syntactic dependency or constituent trees rather than by the linear structure of the text.

2.3 Weighted Words

The most basic form of weighted word feature extraction is TF, where each word is mapped to a number corresponding to the number of occurrences of that word in the whole corpus. Methods that extend the results of TF generally use word frequency as a boolean or logarithmically scaled weighting. In all weighted word methods, each document is translated to a vector (with length equal to the size of the vocabulary) containing the frequency of the words in that document. Although this approach is intuitive, it is limited by the fact that particular words that are commonly used in the language may dominate such representations.

2.3.1 Bag of Words (BoW)

The bag-of-words model (BoW model) is a reduced and simplified representation of a text document from selected parts of the text, based on specific criteria, such as word frequency.

The BoW technique is used in several domains such as computer vision, NLP, Bayesian spam filters, as well as document classification and information retrieval by Machine Learning.

In BoW, a body of text, such as a document or a sentence, is thought of as a bag of words. Lists of words are created in the BoW process. These lists do not preserve sentence structure or grammar, and the semantic relationships between the words are ignored in their collection and construction. The words are often representative of the content of a sentence. While grammar and order of appearance are ignored, multiplicity is counted and may be used later to determine the focus points of the documents.

Here is an example of BoW:

Document

“As the home to UVA’s recognized undergraduate and graduate degree programs in systems engineering. In the UVA Department of Systems and Information Engineering, our students are exposed to a wide range of range”

Bag-of-Words (BoW)

{“As”, “the”, “home”, “to”, “UVA’s”, “recognized”, “undergraduate”, “and”, “graduate”, “degree”, “program”, “in”, “systems”, “engineering”, “in”, “Department”, “Information”,“students”, “ ”,“are”, “exposed”, “wide”, “range” }

Bag-of-Feature (BoF)

Feature = {1,1,1,3,2,1,2,1,2,3,1,1,1,2,1,1,1,1,1,1}
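A hedged sketch using scikit-learn's CountVectorizer, one common implementation of the bag-of-words idea. Because the vectorizer lowercases and tokenizes with its own defaults, the learned vocabulary and counts will not match the hand-built lists above exactly:

from sklearn.feature_extraction.text import CountVectorizer

document = ("As the home to UVA's recognized undergraduate and graduate degree programs "
            "in systems engineering. In the UVA Department of Systems and Information "
            "Engineering, our students are exposed to a wide range of range")

vectorizer = CountVectorizer()                  # grammar and word order are ignored
counts = vectorizer.fit_transform([document])   # 1 x |vocabulary| sparse count matrix

# Pair each vocabulary entry with its count (requires scikit-learn >= 1.0 for this helper).
vocabulary = vectorizer.get_feature_names_out()
print(dict(zip(vocabulary, counts.toarray()[0])))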

2.3.2 Limitation of Bag-of-Words

Bag-of-words models encode every word in the vocabulary as a one-hot-encoded vector: for a vocabulary of size $|\Sigma|$, each word is represented by a $|\Sigma|$-dimensional sparse vector with a 1 at the index corresponding to the word and a 0 at every other index. As the vocabulary may potentially run into millions of words, bag-of-words models face scalability challenges. In addition, word order is discarded (e.g., “This is good” and “Is this good” have exactly the same vector representation). These technical problems of the bag-of-words representation remain a main challenge for the computer science and data science communities.

Term frequency, also called bag-of-words, is the simplest technique of text feature extraction. This method is based on counting the number of words in each document and assigning it to the feature space.

2.3.3 Term Frequency-Inverse Document Frequency

K. Sparck Jones Sparck Jones (1972) proposed Inverse Document Frequency (IDF) as a method to be used in conjunction with term frequency in order to lessen the effect of implicitly common words in the corpus. IDF assigns a lower weight to terms that appear in many documents and a higher weight to rare terms. This combination of TF and IDF is well known as Term Frequency-Inverse Document Frequency (TF-IDF). The mathematical representation of the weight of a term in a document by TF-IDF is given in Equation (1).

$W(d, t) = TF(d, t) \times \log\left(\frac{N}{df(t)}\right)$ (1)

Here, $N$ is the number of documents and $df(t)$ is the number of documents containing the term $t$ in the corpus. The first term in Equation (1) improves the recall while the second term improves the precision of the word embedding Tokunaga and Makoto (1994). Although TF-IDF tries to overcome the problem of common terms in the document, it still suffers from some other descriptive limitations. Namely, TF-IDF cannot account for the similarity between the words in the document since each word is independently presented as an index. However, with the development of more complex models in recent years, new methods, such as word embedding, have been presented that can incorporate concepts such as similarity of words and part of speech tagging.
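A small sketch of Equation (1) computed directly, using raw-count TF and the plain log(N/df) IDF; library implementations such as scikit-learn's TfidfVectorizer apply slightly different smoothing and normalization, so their numbers will differ:

import math
from collections import Counter

docs = [["this", "movie", "was", "great"],
        ["this", "movie", "was", "terrible"],
        ["a", "great", "plot", "and", "great", "acting"]]

N = len(docs)
# df(t): number of documents containing term t
df = Counter(term for doc in docs for term in set(doc))

def tfidf(doc):
    tf = Counter(doc)
    return {t: tf[t] * math.log(N / df[t]) for t in tf}

print(tfidf(docs[2]))
# "great" appears in two of the three documents, so it is down-weighted
# relative to rarer terms such as "plot" and "acting".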

2.4 Word Embedding

Even though we have syntactic word representations, this does not mean that the model captures the semantic meaning of words. On the other hand, bag-of-words models do not respect the semantics of words. For example, the words “airplane”, “aeroplane”, “plane”, and “aircraft” are often used in the same context. However, the vectors corresponding to these words are orthogonal in the bag-of-words model. This issue presents a serious problem for understanding sentences within the model. The other problem with bag-of-words is that the order of words in the phrase is not respected. The n-gram technique does not fully solve this problem, so a measure of similarity between the words in a sentence is needed. Many researchers have worked on word embeddings to solve this problem. The Skip-gram and continuous bag-of-words (CBOW) models of Mikolov et al. (2013a) propose a simple single-layer architecture based on the inner product between two word vectors.

Word embedding is a feature learning technique in which each word or phrase from the vocabulary is mapped to an $N$-dimensional vector of real numbers. Various word embedding methods have been proposed to translate unigrams into understandable input for machine learning algorithms. This work focuses on Word2Vec, GloVe, and FastText, three of the most common methods that have been successfully used for deep learning techniques. Recently, a novel technique of word representation was introduced in which word vectors depend on the context of the word, called “Contextualized Word Representations” or “Deep Contextualized Word Representations”.

2.4.1 Word2Vec

T. Mikolov et al. Mikolov et al. (2013a, b) presented "word to vector" representation as an improved word embedding architecture. The Word2Vec approach uses shallow two-layer neural networks, the continuous bag-of-words (CBOW) and the Skip-gram models, to create a high-dimensional vector for each word. The Skip-gram model operates over a corpus of words $w$ and their contexts $c$ Goldberg and Levy (2014). The goal is to maximize the probability:

$\arg\max_{\theta} \prod_{w \in T} \left[ \prod_{c \in c(w)} p(c \mid w; \theta) \right]$ (2)

where $T$ refers to the text, and $\theta$ is the parameter of $p(c \mid w; \theta)$.

Figure 2 shows a simple CBOW model which tries to find the target word based on its surrounding words, while Skip-gram tries to find words that might come in the vicinity of each word. The weights between the input layer and the output layer are represented as a matrix $W$ of dimension $V \times N$ Rong (2014), where $V$ is the vocabulary size and $N$ the embedding dimension.

(3)

This method provides a very powerful tool for discovering relationships in a text corpus as well as similarity between words. For example, this embedding would consider words such as “big” and “bigger” to be close to each other in the vector space it assigns them.

Figure 2: The continuous bag-of-words (CBOW) architecture predicts the current word based on the context, and the Skip-gram predicts surrounding words based on the given current word Mikolov et al. (2013a).

Continuous Bag-of-Words Model

In the continuous bag-of-words model, multiple context words are used for a given target word; for example, the words “airplane” and “military” can serve as context words for “air-force” as the target word. This consists of replicating the input-to-hidden-layer connections as many times as there are context words Mikolov et al. (2013a). Thus, the bag-of-words model is mostly used to represent an unordered collection of words as a vector. The first thing to do is create a vocabulary, which means collecting all the unique words in the corpus. The output of the shallow neural network is the predicted target word, so the task is “predicting the word given its context”. The number of words used depends on the setting for the window size (a common size is 4–5 words).

Continuous Skip-Gram Model

Another model architecture which is very similar to CBOW Mikolov et al. (2013a) is the continuous Skip-gram model; however, instead of predicting the current word based on the context, this model tries to maximize classification of a word based on another word in the same sentence. The continuous bag-of-words model and continuous Skip-gram model are used to keep syntactic and semantic information of sentences for machine learning algorithms.
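A hedged sketch of training both architectures with gensim (parameter names follow the gensim 4.x API; the tiny corpus is only for illustration, and useful embeddings require far more text):

from gensim.models import Word2Vec

sentences = [
    ["the", "airplane", "landed", "at", "the", "airport"],
    ["the", "aircraft", "landed", "at", "the", "airport"],
    ["she", "boarded", "the", "plane", "to", "paris"],
]

# sg=0 -> CBOW (predict the word from its context);
# sg=1 -> Skip-gram (predict the context from the word).
cbow = Word2Vec(sentences, vector_size=50, window=4, min_count=1, sg=0)
skipgram = Word2Vec(sentences, vector_size=50, window=4, min_count=1, sg=1)

print(skipgram.wv["airplane"].shape)               # a 50-dimensional vector
print(skipgram.wv.most_similar("airplane", topn=3))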

2.4.2 Global Vectors for Word Representation (GloVe)

Another powerful word embedding technique that has been used for text classification is Global Vectors (GloVe) Pennington et al. (2014). The approach is very similar to the Word2Vec method, where each word is represented by a high-dimensional vector and trained based on the surrounding words over a huge corpus. The pre-trained word embedding used in many works is based on a vocabulary of 400,000 words trained over Wikipedia 2014 and Gigaword 5 as the corpus, with 50 dimensions for word representation. GloVe also provides other pre-trained word vectorizations with 100, 200, and 300 dimensions, which are trained over even bigger corpora, including Twitter content. Figure 3 shows a visualization of the word distances over a sample data set using the t-SNE technique Maaten and Hinton (2008). The objective function is as follows:

$F(w_i - w_j, \tilde{w}_k) = \frac{P_{ik}}{P_{jk}}$ (4)

where $w_i$ refers to the word vector of word $i$, and $P_{ik}$ denotes the probability of word $k$ occurring in the context of word $i$.

Figure 3: GloVe: Global Vectors for Word Representation.
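Pre-trained GloVe vectors are distributed as plain text files (one word per line, followed by its vector). A minimal loader sketch, assuming a local copy of a file such as glove.6B.50d.txt (the path and file name are illustrative):

import numpy as np

def load_glove(path):
    """Parse a GloVe text file into a {word: vector} dictionary."""
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            embeddings[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return embeddings

glove = load_glove("glove.6B.50d.txt")     # illustrative local path
doc = ["the", "airplane", "landed"]
# One simple document representation: the average of its word vectors.
doc_vector = np.mean([glove[w] for w in doc if w in glove], axis=0)
print(doc_vector.shape)                    # (50,)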

2.4.3 FastText

Many other word embedding representations ignore the morphology of words by assigning a distinct vector to each word Bojanowski et al. (2016). The Facebook AI Research lab released a novel technique to solve this issue by introducing a new word embedding method called FastText. Each word, $w$, is represented as a bag of character n-grams. For example, given the word “introduce” and $n = 3$, FastText will produce the following representation composed of character tri-grams:

<in, int, ntr, tro, rod, odu, duc, uce, ce>

Note that a sequence such as <int>, corresponding to a whole word, is different from the tri-gram “int” extracted from inside the word “introduce”; the boundary symbols < and > keep them distinct.

Suppose we have a dictionary of n-grams of size $G$, and a given word $w$ whose set of n-grams is $\mathcal{G}_w \subset \{1, \dots, G\}$, with a vector representation $z_g$ associated with each n-gram $g$. The obtained scoring function Bojanowski et al. (2016) in this case is:

$s(w, c) = \sum_{g \in \mathcal{G}_w} z_g^{\top} v_c$ (5)

where $v_c$ is the vector representation of the context word $c$.

Facebook published pre-trained word vectors for 294 languages, trained on Wikipedia using FastText with 300 dimensions. FastText used the Skip-gram model Bojanowski et al. (2016) with default parameters.
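A small sketch of the character n-gram decomposition described above (pure Python and purely illustrative; the actual FastText implementation additionally hashes the n-grams into a fixed-size table and sums their vectors with the whole-word token):

def char_ngrams(word, n=3):
    """Character n-grams of a word, padded with the boundary symbols < and >."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

print(char_ngrams("introduce"))
# ['<in', 'int', 'ntr', 'tro', 'rod', 'odu', 'duc', 'uce', 'ce>']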

2.4.4 Contextualized Word Representations

Contextualized word representations are another word embedding technique; they build on the context2vec method Melamud et al. (2016), which uses a bidirectional long short-term memory (LSTM) network, and on the contextualized vectors introduced by B. McCann et al. M.E. Peters et al. Peters et al. (2018) built upon these techniques to create the deep contextualized word representations technique. This technique captures both main features of word representation: (I) complex characteristics of word use (e.g., syntax and semantics) and (II) how these uses vary across linguistic contexts (e.g., to model polysemy) Peters et al. (2018).

The main idea behind these word embedding techniques is that the resulting word vectors are learned from a bidirectional language model (biLM), which consists of both a forward and a backward LM.

The forward LM is as follows:

$p(t_1, t_2, \dots, t_N) = \prod_{k=1}^{N} p(t_k \mid t_1, t_2, \dots, t_{k-1})$ (6)

The backward LM is as follows:

$p(t_1, t_2, \dots, t_N) = \prod_{k=1}^{N} p(t_k \mid t_{k+1}, t_{k+2}, \dots, t_N)$ (7)

This formulation jointly maximizes the log-likelihood of the forward and backward directions as follows:

$\sum_{k=1}^{N} \Big( \log p(t_k \mid t_1, \dots, t_{k-1}; \Theta_x, \overrightarrow{\Theta}_{LSTM}, \Theta_s) + \log p(t_k \mid t_{k+1}, \dots, t_N; \Theta_x, \overleftarrow{\Theta}_{LSTM}, \Theta_s) \Big)$ (8)

where $\Theta_x$ is the token representation and $\Theta_s$ refers to the softmax layer. Then, ELMo is computed as a task-specific weighting of all biLM layers as follows:

$ELMo_k^{task} = E(R_k; \Theta^{task})$ (9)

where $E(R_k; \Theta^{task})$ is calculated by:

$E(R_k; \Theta^{task}) = \gamma^{task} \sum_{j=0}^{L} s_j^{task}\, h_{k,j}^{LM}$ (10)

where $s^{task}$ stands for the softmax-normalized weights and $\gamma^{task}$ is a scalar parameter.

2.5 Limitations

Although the continuous bag-of-words model and continuous Skip-gram model are used to keep the syntactic and semantic information of individual sentences for machine learning algorithms, there remains the issue of how to preserve the full meaning of coherent documents for machine learning.

Example:

Document: {“Maryam went to Paris on July 4th, 2018. She missed the independence day fireworks and celebrations. This day is a federal holiday in the United States commemorating the Declaration of Independence of the United States on July 4, 1776. The Continental Congress declared that the thirteen American colonies were no longer subject to the monarch of Britain and were now united, free, and independent states. She wants to stay in the country for next year and celebrate with her friends.”}

Sentence level of this document:

S1: {“Maryam went to Paris on July 4th, 2018.”}
S2: {“She missed the independence day fireworks and celebrations.”}
S3: {“This day is a federal holiday in the United States commemorating the Declaration of Independence of the United States on July 4, 1776.”}
S4: {“The Continental Congress declared that the thirteen American colonies were no longer subject to the monarch of Britain and were now united, free, and independent states.”}
S5: {“She has a plan for next year to stay in the country and celebrate with her friends.”}

Limitation:

Figure 4 shows how feature extraction fails at the per-sentence level. The purple color shown in the figure marks the brief history of “This day”, and “This day” refers back to “July 4th” in S1. Likewise, in S5, “She” refers to “Maryam” in S1.

Figure 4: Limitation of document feature extraction by per-sentence level.

3 Dimensionality Reduction

Text sequences in term-based vector models consist of many features. Thus, time complexity and memory consumption are very expensive for these methods. To address this issue, many researchers use dimensionality reduction to reduce the size of feature space. In this section, existing dimensionality reduction algorithms are discussed in detail.

3.1 Component Analysis

3.1.1 Principal Component Analysis (PCA)

Principal component analysis (PCA) is the most popular technique in multivariate analysis and dimensionality reduction. PCA is a method to identify a subspace in which the data approximately lies Abdi and Williams (2010). This means finding new variables that are uncorrelated and maximizing the variance to “preserve as much variability as possible” Jolliffe and Cadima (2016).

Suppose a data set $X = \{x_1, x_2, \dots, x_n\}$ is given, where $x_i \in \mathbb{R}^p$ for each $i$ ($i = 1, \dots, n$). The $j$-th column of the matrix $X$ is the vector $x_j$ of observations on the $j$-th variable. A linear combination of the $x_j$'s can be written as:

$\sum_{j=1}^{p} a_j x_j = X a$ (11)

where $a$ is a vector of constants $a_1, a_2, \dots, a_p$. The variance of this linear combination can be given as:

$\operatorname{var}(Xa) = a^{\top} S a$ (12)

where $S$ is the sample covariance matrix. The goal is to find the linear combination with maximum variance. This translates into maximizing $a^{\top} S a - \lambda (a^{\top} a - 1)$, where $\lambda$ is a Lagrange multiplier.

PCA can be used as a pre-processing tool to reduce the dimension of a data set before running a supervised learning algorithm on it (using the projected components as inputs). PCA is also a valuable tool as a noise reduction algorithm and can be helpful in avoiding the over-fitting problem Ng (2015). Kernel principal component analysis (KPCA) is another dimensionality reduction method that generalizes linear PCA to the nonlinear case by using the kernel method Cao et al. (2003).
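A minimal NumPy sketch of the variance-maximization view of PCA described above, via an eigen-decomposition of the sample covariance matrix (scikit-learn's PCA wraps an equivalent SVD-based computation); the data here are random stand-ins for a document-feature matrix:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # 200 documents, 10 (e.g., TF-IDF) features

Xc = X - X.mean(axis=0)                   # center the data
S = np.cov(Xc, rowvar=False)              # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(S)      # eigh: S is symmetric

# Keep the k directions with maximum variance (largest eigenvalues).
k = 2
components = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
X_reduced = Xc @ components               # project into the k-dimensional subspace
print(X_reduced.shape)                    # (200, 2)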

3.1.2 Independent Component Analysis (ICA)

Independent component analysis (ICA) was introduced by Jeanny Hérault and colleagues Hérault (1984). The technique was then further developed by C. Jutten and J. Herault Jutten and Herault (1991). ICA is a statistical modeling method where the observed data are expressed as a linear transformation of independent components Hyvärinen et al. (2001). Assume that four linear mixtures ($x_1, x_2, x_3, x_4$) of four independent components ($s_1, s_2, s_3, s_4$) are observed:

$x_j = a_{j1} s_1 + a_{j2} s_2 + a_{j3} s_3 + a_{j4} s_4, \quad j = 1, \dots, 4$ (13)

The vector–matrix notation is written as:

$x = A s$ (14)

Denoting the columns of the mixing matrix $A$ by $a_i$, the model can also be written Hyvärinen and Oja (2000) as follows:

$x = \sum_{i=1}^{4} a_i s_i$ (15)
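A hedged sketch using scikit-learn's FastICA to recover independent components from linear mixtures; the sources and mixing matrix below are synthetic and purely illustrative:

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
# Two independent, non-Gaussian sources with 1000 observations each.
s = np.c_[rng.uniform(-1, 1, 1000), rng.laplace(size=1000)]
A = np.array([[1.0, 0.5], [0.4, 1.0]])    # synthetic mixing matrix
x = s @ A.T                               # observed linear mixtures x = A s

ica = FastICA(n_components=2, random_state=0)
s_est = ica.fit_transform(x)              # estimated independent components
print(s_est.shape)                        # (1000, 2)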

3.2 Linear Discriminant Analysis (LDA)

LDA is a commonly used technique for data classification and dimensionality reduction Sugiyama (2007). LDA is particularly helpful where the within-class frequencies are unequal and their performances have been evaluated on randomly generated test data. Class-dependent and class-independent transformation are two approaches to LDA in which the ratio of between class variance to within class variance and the ratio of the overall variance to within class variance are used respectively Balakrishnama and Ganapathiraju (1998).

Let $\{(x_i, y_i)\}_{i=1}^{n}$ be the data, where the $x_i$ are $d$-dimensional samples and the $y_i$ are the associated targets or outputs Sugiyama (2007), where $n$ is the number of documents and the labels range over $c$ categories. The number of samples in each class is calculated as follows:

(16)

where

(17)

The generalization of the between-class scatter matrix is defined as follows:

(18)

where

(19)

With respect to the projection vector $w$, the samples can be projected into a lower-dimensional space as follows:

(20)
(21)

Thus, the mean vectors and scatter matrices for the samples projected into the lower dimension are as follows:

(22)
(23)

If the projection is not one-dimensional (more than one projection dimension), the determinants of the scatter matrices are used as follows:

(24)

From Fisher discriminant analysis (FDA) Sugiyama (2006, 2007), we can re-write the equation as:

(25)
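In practice, these projections are rarely coded by hand. A hedged sketch with scikit-learn's LinearDiscriminantAnalysis used as a supervised dimensionality reduction step; the data are random and purely illustrative:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))            # 300 documents, 20 features
y = rng.integers(0, 3, size=300)          # 3 categories

# LDA projects onto at most (number of classes - 1) discriminant directions.
lda = LinearDiscriminantAnalysis(n_components=2)
X_low = lda.fit_transform(X, y)
print(X_low.shape)                         # (300, 2)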

3.3 Non-Negative Matrix Factorization (NMF)

Non-negative matrix factorization (NMF), or non-negative matrix approximation, has been shown to be a very powerful technique for very high-dimensional data such as text and sequence analysis Pauca et al. (2004). This technique is a promising method for dimensionality reduction Tsuge et al. (2001). In this section, a brief overview of NMF is discussed for text and document data sets. Given a non-negative matrix $V$, NMF finds an approximation:

$V \approx W H$ (26)

where $W \in \mathbb{R}^{n \times r}$ and $H \in \mathbb{R}^{r \times m}$. Suppose $r \ll m$; then the product $WH$ can be regarded as a compressed form of the data in $V$, where $v_i$ and $h_i$ are the corresponding columns of $V$ and $H$. The computation of each corresponding column can be re-written as follows:

$v_i \approx W h_i$ (27)

The computational time of each iteration, as introduced by S. Tsuge et al. Tsuge et al. (2001), can be written as follows:

(28)
(29)

Thus, the local minimum of the objective function is calculated as follows:

(30)

The maximization of the objective function can be re-written as follows:

(31)

The objective function, given by the Kullback–Leibler Kullback and Leibler (1951); Johnson and Sinanovic (2001) divergence, is defined as follows:

(32)
(33)
(34)

This NMF-based dimensionality reduction contains the following five steps Tsuge et al. (2001), where step 6 is optional but commonly used in information retrieval (a short sketch of the pipeline follows the list):

  1. Extract index terms after pre-processing steps such as feature extraction and text cleaning, as discussed in Section 2; we then have $n$ documents with $m$ features;

  2. Create the term–document matrix $V$, where each entry combines a local weight of a term in a document with a global weight of that term across the collection;

  3. Apply NMF to the term–document matrix;

  4. Project the trained document vectors into the $r$-dimensional space;

  5. Using the same transformation, map the test set into the $r$-dimensional space;

  6. Calculate the similarity between the transformed document vectors and a query vector.
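A hedged sketch of this pipeline with scikit-learn, using TF-IDF weighting as the local/global term weights and NMF to project documents into an r-dimensional space; the corpus is illustrative:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

train_docs = ["the movie was great", "a great plot and great acting",
              "the film was terrible", "terrible acting and a weak plot"]
test_docs = ["a great film"]

tfidf = TfidfVectorizer()
V_train = tfidf.fit_transform(train_docs)            # non-negative document-term matrix

r = 2
nmf = NMF(n_components=r, init="nndsvd", random_state=0)
H_train = nmf.fit_transform(V_train)                 # documents projected into r dimensions
H_test = nmf.transform(tfidf.transform(test_docs))   # map the test set with the same model
print(H_train.shape, H_test.shape)                   # (4, 2) (1, 2)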

3.4 Random Projection

Random projection is a novel technique for dimensionality reduction which is mostly used for high-volume data sets or high-dimensional feature spaces. Texts and documents, especially with weighted feature extraction, generate a huge number of features. Many researchers have applied random projection to text data Bingham and Mannila (2001); Chakrabarti et al. (2003) for text mining, text classification, and dimensionality reduction. In this section, we review some basic random projection techniques. Figure 5 shows an overview of random projection.
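A minimal sketch of a Gaussian random projection in NumPy (scikit-learn offers equivalent GaussianRandomProjection and SparseRandomProjection estimators); the feature matrix here is a random stand-in for high-dimensional TF-IDF features:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10_000))      # 500 documents, 10,000 (e.g., TF-IDF) features

k = 300                                  # target dimension
# Random Gaussian matrix; scaling by 1/sqrt(k) approximately preserves pairwise distances.
R = rng.normal(size=(10_000, k)) / np.sqrt(k)
X_low = X @ R
print(X_low.shape)                       # (500, 300)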

3.4.1 Random Kitchen Sinks

The key idea of random kitchen sinks Rahimi and Recht (2009) is sampling via Monte Carlo integration Morokoff and Caflisch (1995) to approximate the kernel as part of dimensionality reduction. This technique works only for shift-invariant kernels:

$K(x, y) = K(x - y)$ (35)

where the shift-invariant kernel is approximated by:

(36)
(37)

where $D$ is the target number of samples, $p(\omega)$ is a probability distribution, $\omega$ stands for a random direction, and the mapping takes the original features into the $D$-dimensional target space.

Figure 5: The plot on the left shows how we generate random direction, and the plot on the right shows how we project the data set into the new space using complex numbers.
(38)
(39)
(40)
(41)

where $b$ is a uniform random variable ($b \in [0, 2\pi]$).

3.4.2 Johnson Lindenstrauss Lemma

William B. Johnson and Joram Lindenstrauss Johnson et al. (1986); Dasgupta and Gupta (2003) proved that for any $n$ points in a Euclidean space and any $0 < \epsilon < 1$, there is a mapping $f$ into a $k$-dimensional Euclidean space, with $k = O(\epsilon^{-2} \log n)$, such that $(1 - \epsilon)\|u - v\|^2 \leq \|f(u) - f(v)\|^2 \leq (1 + \epsilon)\|u - v\|^2$ for all pairs of points $u, v$, with the following lower bound on the success probability:

(42)

Johnson Lindenstrauss Lemma Proof Vempala, S. S. (2009):

For any sets of data point from where and random variable :

(43)

If we let :

(44)

Lemma 1 Proof Vempala, S. S. (2009):

Let be a random variable with degrees of freedom, then for

(45)

We start with Markov’s inequality Mao and Yuan (2016):

(46)
(47)

where, using the fact above, we can prove one side of the bound; the proof of the other side is similar.

(48)
(49)

Lemma 2 Proof Vempala, S. S. (2009):

Let be a random variable of and , is data points then for any  :

(50)

In Equation (50), is the random approximation value and , so we can rewrite the Equation (50) by .

Call and   thus:

(51)

where we can prove Equation (51) by using Equation (45):

(52)

3.5 Autoencoder

An autoencoder is a type of neural network that is trained to attempt to copy its input to its output Goodfellow et al. (2016). The autoencoder has achieved great success as a dimensionality reduction method via the powerful representational capacity of neural networks Wang et al. (2014). The first version of the autoencoder was introduced by D.E. Rumelhart et al. Rumelhart et al. (1985) in 1985. The main idea is that one hidden layer between the input and output layers has fewer units Liang et al. (2017) and could thus be used to reduce the dimensions of a feature space. Especially for texts, documents, and sequences that contain many features, using an autoencoder could allow for faster, more efficient data processing.

3.5.1 General Framework

As shown in Figure 6, the input and output layers of an autoencoder contain $n$ units, and the hidden (code) layer $Z$ contains $p$ units with $p < n$ Baldi (2012). For this technique of dimensionality reduction, the dimensions of the final feature space are reduced from $n$ to $p$. The encoder representation involves a sum of the representation of all words (for bag-of-words), reflecting the relative frequency of each word AP et al. (2014):

(53)

where $\phi$ is an element-wise non-linearity such as the sigmoid (Equation (79)).

Figure 6: This figure shows how a simple autoencoder works. The model depicted contains an input layer, a code (bottleneck) layer, and an output layer; two hidden layers are used for encoding and two are used for decoding.
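A hedged Keras sketch of such a bottleneck autoencoder used for dimensionality reduction; the layer sizes and the random input matrix are illustrative, and the encoder output serves as the reduced feature space:

import numpy as np
from tensorflow.keras import layers, models

n_features, code_dim = 5000, 64           # illustrative sizes

inputs = layers.Input(shape=(n_features,))
h = layers.Dense(512, activation="relu")(inputs)
code = layers.Dense(code_dim, activation="relu", name="code")(h)    # bottleneck layer
h = layers.Dense(512, activation="relu")(code)
outputs = layers.Dense(n_features, activation="sigmoid")(h)         # reconstruct the input

autoencoder = models.Model(inputs, outputs)
encoder = models.Model(inputs, code)       # reuse the encoder for dimensionality reduction
autoencoder.compile(optimizer="adam", loss="mse")

X = np.random.rand(256, n_features)        # stand-in for a (e.g., TF-IDF) matrix
autoencoder.fit(X, X, epochs=2, batch_size=32, verbose=0)
X_low = encoder.predict(X, verbose=0)      # (256, 64) reduced representation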

3.5.2 Convolutional Autoencoder Architecture

A convolutional neural network (CNN)-based autoencoder can be divided into two main steps Masci et al. (2011): encoding and decoding.

(54)

where $F^{(1)}$ is a convolutional filter, convolved with an input volume $I$, which learns to represent the input by combining non-linear functions:

(55)

where $b$ is the bias, and the number of zeros we want to pad the input with is chosen such that dim(I) = dim(decode(encode(I))). Finally, the encoding convolution is equal to:

(56)

The decoding convolution step produces $n$ feature maps. The reconstructed result $\tilde{I}$ is the result of the convolution between the volume of feature maps $Z$ and the convolutional filter volume $F^{(2)}$ Chen et al. (2015); Geng et al. (2015); Masci et al. (2011).

(57)
(58)

where Equation (58) shows the decoding convolution; the input's dimensions are equal to the output's dimensions.

3.5.3 Recurrent Autoencoder Architecture

A recurrent neural network (RNN) is a natural generalization of feedforward neural networks to sequences Sutskever et al. (2014). Figure 7 illustrates the recurrent autoencoder architecture. A standard RNN computes the encoding as a sequence of outputs by iterating:

$h_t = \operatorname{sigm}\!\left(W^{hx} x_t + W^{hh} h_{t-1}\right)$ (59)

$y_t = W^{yh} h_t$ (60)

where $x = (x_1, \dots, x_T)$ is the input sequence and $y = (y_1, \dots, y_T)$ refers to the output sequence. A multinomial distribution (1-of-K coding) can be output using a softmax activation function Cho et al. (2014):

$p(x_{t,j} = 1 \mid x_{t-1}, \dots, x_1) = \frac{\exp\!\left(w_j h_t\right)}{\sum_{j'=1}^{K} \exp\!\left(w_{j'} h_t\right)}$ (61)

By combining these probabilities, we can compute the probability of the sequence $x$ as:

$p(x) = \prod_{t=1}^{T} p(x_t \mid x_{t-1}, \dots, x_1)$ (62)
Figure 7: A recurrent autoencoder structure.
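A hedged Keras sketch of a sequence (recurrent) autoencoder in the spirit of Figure 7, where an LSTM encoder compresses a sequence of token embeddings into a code vector and an LSTM decoder reconstructs the sequence; all sizes and the random input are illustrative:

import numpy as np
from tensorflow.keras import layers, models

seq_len, emb_dim, code_dim = 20, 100, 32       # illustrative sizes

inputs = layers.Input(shape=(seq_len, emb_dim))
code = layers.LSTM(code_dim)(inputs)                         # encoder: sequence -> vector
h = layers.RepeatVector(seq_len)(code)                       # feed the code at every time step
h = layers.LSTM(64, return_sequences=True)(h)                # decoder LSTM
outputs = layers.TimeDistributed(layers.Dense(emb_dim))(h)   # reconstruct each time step

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

X = np.random.rand(128, seq_len, emb_dim)      # stand-in for embedded word sequences
autoencoder.fit(X, X, epochs=2, batch_size=32, verbose=0)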

3.6 T-distributed Stochastic Neighbor Embedding (t-SNE)

T-SNE is a nonlinear dimensionality reduction method for embedding high-dimensional data. This method is most commonly used for visualization in a low-dimensional feature space Maaten and Hinton (2008), as shown in Figure 8. The approach is based on stochastic neighbor embedding (SNE) by G. Hinton and S.T. Roweis Hinton and Roweis (2003). SNE works by converting the high-dimensional Euclidean distances into conditional probabilities which represent similarities Maaten and Hinton (2008). The conditional probability $p_{j|i}$ is calculated by:

$p_{j|i} = \frac{\exp\!\left(-\|x_i - x_j\|^2 / 2\sigma_i^2\right)}{\sum_{k \neq i} \exp\!\left(-\|x_i - x_k\|^2 / 2\sigma_i^2\right)}$ (63)

where $\sigma_i$ is the variance of the Gaussian centered on data point $x_i$. The similarity of map point $y_j$ to $y_i$ in the low-dimensional space is calculated as follows:

$q_{j|i} = \frac{\exp\!\left(-\|y_i - y_j\|^2\right)}{\sum_{k \neq i} \exp\!\left(-\|y_i - y_k\|^2\right)}$ (64)

The cost function $C$ is as follows:

$C = \sum_{i} KL(P_i \| Q_i)$ (65)

where $KL(P_i \| Q_i)$ is the Kullback–Leibler divergence Joyce (2011), which is calculated as:

$KL(P_i \| Q_i) = \sum_{j} p_{j|i} \log \frac{p_{j|i}}{q_{j|i}}$ (66)

The gradient update with a momentum term is as follows:

$\mathcal{Y}^{(t)} = \mathcal{Y}^{(t-1)} + \eta \frac{\partial C}{\partial \mathcal{Y}} + \alpha(t)\left(\mathcal{Y}^{(t-1)} - \mathcal{Y}^{(t-2)}\right)$ (67)

where $\eta$ is the learning rate, $\mathcal{Y}^{(t)}$ refers to the solution at iteration $t$, and $\alpha(t)$ indicates the momentum at iteration $t$. Now we can re-write symmetric SNE with a joint probability distribution, $Q$, in the low-dimensional space as follows Maaten and Hinton (2008):

$q_{ij} = \frac{\exp\!\left(-\|y_i - y_j\|^2\right)}{\sum_{k \neq l} \exp\!\left(-\|y_k - y_l\|^2\right)}$ (68)

and its counterpart $P$ in the high-dimensional space is:

$p_{ij} = \frac{\exp\!\left(-\|x_i - x_j\|^2 / 2\sigma^2\right)}{\sum_{k \neq l} \exp\!\left(-\|x_k - x_l\|^2 / 2\sigma^2\right)}$ (69)

The gradient of symmetric SNE is as follows:

$\frac{\partial C}{\partial y_i} = 4 \sum_{j} (p_{ij} - q_{ij})(y_i - y_j)$ (70)
Figure 8: This figure presents the t-distributed stochastic neighbor embedding (t-SNE) visualization of Word2vec of the Federal Railroad Administration (FRA) data set.
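A hedged sketch of producing such a visualization with scikit-learn's TSNE applied to word vectors; the vectors below are random stand-ins for trained Word2Vec embeddings:

import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
words = [f"word_{i}" for i in range(200)]
vectors = rng.normal(size=(200, 100))      # stand-in for 100-d Word2Vec vectors

tsne = TSNE(n_components=2, perplexity=30, random_state=0)
coords = tsne.fit_transform(vectors)       # 2-D coordinates, one row per word, for plotting
print(coords.shape)                        # (200, 2)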

4 Existing Classification Techniques

In this section, we outline existing text and document classification algorithms. First, we describe the Rocchio algorithm, which is used for text classification. Then, we address two popular techniques in ensemble learning: boosting and bagging. Some methods, such as logistic regression, Naïve Bayes, and k-nearest neighbor, are more traditional but still commonly used in the scientific community. Support vector machines (SVMs), especially kernel SVMs, are also broadly used as a classification technique. Tree-based classification algorithms, such as decision tree and random forest, are fast and accurate for document categorization. We also describe neural-network-based algorithms such as deep neural networks (DNN), CNN, RNN, deep belief networks (DBN), hierarchical attention networks (HAN), and combination techniques.

4.1 Rocchio Classification

The Rocchio algorithm was first introduced by J.J. Rocchio Rocchio (1971) in 1971 as a method of using relevance feedback to query full-text databases. Since then, many researchers have addressed and developed this technique for text and document classification Partalas et al. (2015); Sowmya et al. (2016). This classification algorithm uses TF-IDF weights for each informative word instead of boolean features. Using a training set of documents, the Rocchio algorithm builds a prototype vector for each class. This prototype is an average vector over the training documents' vectors that belong to a certain class. It then assigns each test document to the class with the maximum similarity between the test document and each of the prototype vectors Korde and Mahender (2012). The average vector computes the centroid of a class c (center of mass of its members):

$\vec{\mu}_c = \frac{1}{|D_c|} \sum_{d \in D_c} \vec{v}(d)$ (71)

where $D_c$ is the set of documents in $D$ that belong to class $c$ and $\vec{v}(d)$ is the weighted vector representation of document $d$. The predicted label of document $d$ is the one with the smallest Euclidean distance between the document and the centroid:

$\hat{c} = \arg\min_{c} \left\| \vec{\mu}_c - \vec{v}(d) \right\|$ (72)

Centroids can be normalized to unit length as follows:

$\hat{\mu}_c = \frac{\sum_{d \in D_c} \vec{v}(d)}{\left\| \sum_{d \in D_c} \vec{v}(d) \right\|}$ (73)

Therefore, the label of test documents can be obtained as follows:

$\hat{c} = \arg\max_{c} \; \hat{\mu}_c \cdot \vec{v}(d)$ (74)
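A minimal NumPy sketch of this centroid-based (Rocchio-style) classifier in the spirit of Equations (71)-(74), using normalized centroids and dot-product similarity; scikit-learn's NearestCentroid offers a comparable Euclidean variant. The data here are random stand-ins for TF-IDF vectors:

import numpy as np

def fit_centroids(X, y):
    """One prototype (unit-length centroid) per class, from TF-IDF rows in X."""
    centroids = {}
    for c in np.unique(y):
        mu = X[y == c].mean(axis=0)
        centroids[c] = mu / np.linalg.norm(mu)
    return centroids

def predict(X, centroids):
    """Assign each document to the class whose prototype is most similar."""
    classes = list(centroids)
    sims = np.stack([X @ centroids[c] for c in classes], axis=1)
    return np.array([classes[i] for i in sims.argmax(axis=1)])

X = np.abs(np.random.rand(6, 4))     # stand-in for 6 TF-IDF document vectors
y = np.array([0, 0, 0, 1, 1, 1])
print(predict(X, fit_centroids(X, y)))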

Limitation of Rocchio Algorithm

The Rocchio algorithm for text classification has many limitations, such as the fact that the user can only retrieve a few relevant documents using this model Selvi et al. (2017). Furthermore, this algorithm's results could be improved by taking semantics into consideration Albitar et al. (2012).

4.2 Boosting and Bagging

Voting classification techniques, such as bagging and boosting, have been successfully developed for document and text data set classification Farzi and Bolandi (2016). While boosting adaptively changes the distribution of the training set based on the performance of previous classifiers, bagging does not look at the previous classifier Bauer and Kohavi (1999).

4.2.1 Boosting

The boosting algorithm was first introduced by R.E. Schapire Schapire (1990) in 1990 as a technique for boosting the performance of a weak learning algorithm. This technique was further developed by Freund Freund (1992); Bloehdorn and Hotho (2004).

Figure 9 shows how a boosting algorithm works for a 2D data set: the labeled data are trained by a multi-model architecture (ensemble learning). These developments resulted in AdaBoost (Adaptive Boosting) Freund et al. (1995). Suppose we construct the distribution $D_{t+1}$ such that, given $D_t$ and the weak classifier $h_t$:

$D_{t+1}(i) = \frac{D_t(i)\, \exp\!\left(-\alpha_t\, y_i\, h_t(x_i)\right)}{Z_t}$ (75)

where $Z_t$ refers to the normalization factor and $\alpha_t$ is as follows:

$\alpha_t = \frac{1}{2} \ln\!\left(\frac{1 - \epsilon_t}{\epsilon_t}\right)$ (76)
Figure 9: This figure is the boosting technique architecture.

As shown in Algorithm 1, the inputs are a training set $S$ of size $m$, an inducer $\mathcal{I}$, and an integer $T$ (the number of trials). The algorithm then finds the weight of each instance, and finally, the output is the optimal classifier ($C^{*}$).

input :

training set $S$ of size $m$, inducer $\mathcal{I}$, integer $T$ (number of trials)

$S'$ = $S$ with instance weights initialized to 1
for $i = 1$ to $T$ do
        $C_i = \mathcal{I}(S')$
        $\epsilon_i = \frac{1}{m} \sum_{x_j \in S' : C_i(x_j) \neq y_j} weight(x_j)$
        if $\epsilon_i > \frac{1}{2}$ then
               set $S'$ to a bootstrap sample from $S$ with weight 1 for all instances and go to the top of the loop
        endif
        $\beta_i = \epsilon_i / (1 - \epsilon_i)$
        for each $x_j \in S'$ do
               if $C_i(x_j) = y_j$ then
                      $weight(x_j) = weight(x_j) \cdot \beta_i$
               endif
        endfor
        Normalize weights of instances
endfor
output :   Classifier $C^{*}(x) = \arg\max_{y \in Y} \sum_{i : C_i(x) = y} \log\frac{1}{\beta_i}$
Algorithm 1 The AdaBoost method

The final classifier formulation can be written as:

$f(x) = \operatorname{sign}\!\left(\sum_{t=1}^{T} \alpha_t h_t(x)\right)$ (77)
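A hedged sketch of boosting for text with scikit-learn's AdaBoostClassifier over TF-IDF features; the default weak learner is a shallow decision tree, and the corpus and labels below are illustrative:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import AdaBoostClassifier

docs = ["great movie", "terrible movie", "great plot", "terrible acting",
        "wonderful film", "awful film"]
labels = [1, 0, 1, 0, 1, 0]

X = TfidfVectorizer().fit_transform(docs)
# Each round re-weights the training documents toward those previously misclassified.
clf = AdaBoostClassifier(n_estimators=50)
clf.fit(X, labels)
print(clf.predict(X[:2]))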

4.2.2 Bagging

The bagging algorithm was introduced by L. Breiman Breiman (1996) in 1996 as a voting classifier method. The algorithm is generated from different bootstrap samples Bauer and Kohavi (1999). A bootstrap generates a uniform sample from the training set. If $T$ bootstrap samples $S_1, S_2, \dots, S_T$ have been generated, then we have $T$ classifiers ($C_i$), each built from one bootstrap sample $S_i$. Finally, the aggregated classifier $C^{*}$ is generated from these sub-classifiers, and its output is the class predicted most often by the sub-classifiers, with ties broken arbitrarily Bauer and Kohavi (1999); Breiman (1996). Figure 10 shows a simple bagging algorithm which trains $T$ models. As shown in Algorithm 2, the training set $S$ is used to train the sub-classifiers and to obtain the final classifier $C^{*}$.

Figure 10: This figure shows a simple model of the bagging technique.

input :

training set $S$, inducer $\mathcal{I}$, integer $T$ (number of bootstrap samples)

for $i = 1$ to $T$ do
        $S_i$ = bootstrap sample from $S$
        $C_i = \mathcal{I}(S_i)$
endfor
output :   Classifier $C^{*}(x) = \arg\max_{y \in Y} \sum_{i : C_i(x) = y} 1$
Algorithm 2 Bagging
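A comparable hedged sketch of bagging with scikit-learn, where each sub-classifier is trained on a bootstrap sample and predictions are combined by voting; the corpus and labels are illustrative:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import BaggingClassifier

docs = ["great movie", "terrible movie", "great plot", "terrible acting",
        "wonderful film", "awful film"]
labels = [1, 0, 1, 0, 1, 0]

X = TfidfVectorizer().fit_transform(docs)
# T = n_estimators bootstrap samples, one sub-classifier per sample, majority vote.
bagging = BaggingClassifier(n_estimators=10, random_state=0)
bagging.fit(X, labels)
print(bagging.predict(X[:2]))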

4.2.3 Limitation of Boosting and Bagging

Boosting and bagging methods also have many limitations and disadvantages, such as their computational complexity and loss of interpretability Geurts (2000), which means that feature importance cannot be discovered by these models.

4.3 Logistic Regression

One of the earliest methods of classification is logistic regression (LR). LR was introduced and developed by statistician David Cox in 1958 Cox (2018). LR is a linear classifier whose decision boundary is the hyperplane $\beta^{\top} x = 0$. LR predicts probabilities rather than classes Fan et al. (2008); Genkin et al. (2007).

4.3.1 Basic Framework

The goal of LR is to model the probability of a variable $y$ being 0 or 1 given the input $x$. Let us have text data with feature vectors $x_i$. For binary classification problems, a Bernoulli model should be used Juan and Vidal (2002) as follows:

$p(y_i = 1 \mid x_i, \beta) = \sigma\!\left(\beta^{\top} x_i\right)$ (78)

where $\beta$ is the vector of model parameters, and $\sigma(\cdot)$ is a sigmoid function, defined as shown in Equation (79):

$\sigma(z) = \frac{1}{1 + e^{-z}} = \frac{e^{z}}{1 + e^{z}}$ (79)
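A hedged sketch of a TF-IDF plus logistic-regression text classifier in scikit-learn, which models $P(y = 1 \mid x)$ through the sigmoid of Equation (79); the corpus and labels are illustrative:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["great movie", "terrible movie", "great plot", "terrible acting",
        "wonderful film", "awful film"]
labels = [1, 0, 1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(docs, labels)

print(model.predict(["a great film"]))           # predicted class
print(model.predict_proba(["a great film"]))     # class probabilities from the sigmoid/softmax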

4.3.2 Combining Instance-Based Learning and LR

The LR model specifies the probability of a binary output $y_i \in \{0, 1\}$ given the input $x_i$. We can consider the posterior probability as:

(80)

where:

(81)

where the likelihood ratio can be re-written as:

(82)
(83)

with respect to:

(84)

To obey the basic principle underlying instance-based learning (IBL) Cheng and Hüllermeier (2009), the classifier should be a function of the distance between instances: the likelihood ratio should be large when the distance is small (favoring the corresponding class), small when the distance is large, and close to neutral as the distance grows, favoring neither class; the parameterized function is as follows: