
A multi-model-based deep learning framework for short text multiclass classification with the imbalanced and extremely small data set

by   Jiajun Tong, et al.

Text classification plays an important role in many practical applications, yet real-world datasets are often extremely small. Most existing methods handle such datasets with pre-trained neural network models, but these are either difficult to deploy on mobile devices because of their large model size or unable to fully extract the deep semantic information between phrases and clauses. This paper proposes a multi-model-based deep learning framework for short-text multiclass classification with an imbalanced and extremely small dataset. The framework consists of five layers. The encoder layer uses DistilBERT to obtain context-sensitive dynamic word vectors that are difficult to represent with traditional feature-engineering methods; because the Transformer in this layer is distilled, the framework is compact. The next two layers extract deep semantic information: the output of the encoder layer is fed to a bidirectional LSTM, and the feature matrix is extracted hierarchically through word-level and sentence-level LSTMs to obtain a fine-grained semantic representation. A max-pooling layer then converts the feature matrix into a lower-dimensional matrix, preserving only the most salient features. Finally, the feature matrix is taken as the input of a fully connected softmax layer, whose softmax function converts the predicted linear vector into per-class probabilities. Extensive experiments on two public benchmarks demonstrate the effectiveness of the proposed approach on extremely small datasets: it matches state-of-the-art baselines in precision, recall, accuracy, and F1 score, while its model size, training time, and convergence epoch indicate that it can be deployed faster and lighter on mobile devices.
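The last two stages of the pipeline described above can be sketched concretely. This is a minimal plain-Python illustration (not the authors' implementation; the feature values and shapes are hypothetical) of max-pooling over time followed by the softmax that turns the linear output into class probabilities:

```python
import math

def max_pool_over_time(feature_matrix):
    """Column-wise max over time steps: (T, D) -> (D,).
    Keeps only the strongest activation of each feature,
    as the max-pooling layer in the framework does."""
    return [max(col) for col in zip(*feature_matrix)]

def softmax(logits):
    """Convert a linear score vector into class probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical BiLSTM output: 3 time steps, 4 features.
features = [
    [0.1, 0.9, -0.3, 0.2],
    [0.5, 0.2,  0.7, -0.1],
    [-0.2, 0.4, 0.1, 0.8],
]
pooled = max_pool_over_time(features)  # [0.5, 0.9, 0.7, 0.8]
probs = softmax(pooled)                # per-class probabilities, sums to 1
```

In a real deployment the softmax input would come from a fully connected layer projecting the pooled features down to one score per class; the sketch feeds the pooled vector in directly just to show the probability conversion.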


W-RNN: News text classification based on a Weighted RNN

Most of the information is stored as text, so text mining is regarded as...

Research on Dual Channel News Headline Classification Based on ERNIE Pre-training Model

The classification of news headlines is an important direction in the fi...

Hierarchical Text Classification of Urdu News using Deep Neural Network

Digital text is increasing day by day on the internet. It is very challe...

Transformer-F: A Transformer network with effective methods for learning universal sentence representation

The Transformer model is widely used in natural language processing for ...

Text classification with pixel embedding

We propose a novel framework to understand the text by converting senten...

Text Classification Improved by Integrating Bidirectional LSTM with Two-dimensional Max Pooling

Recurrent Neural Network (RNN) is one of the most popular architectures ...

Recognition and Processing of NATOM

In this paper we show how to process the NOTAM (Notice to Airmen) data o...