A Chinese Text Classification Method With Low Hardware Requirement Based on Improved Model Concatenation

10/28/2020
by   Yuanhao Zhuo, et al.

To improve the accuracy of Chinese text classification models while keeping hardware requirements low, this paper designs an improved concatenation-based model that combines five different sub-models, including TextCNN, LSTM, and Bi-LSTM. Compared with an existing ensemble learning method on a text classification task, this model's accuracy is 2% higher. Meanwhile, its hardware requirements are much lower than those of a BERT-based model.
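The core idea, concatenating the output features of several sub-models before a final classifier, can be sketched as follows. This is an illustrative sketch only: the sub-model internals below are random stand-ins for TextCNN/LSTM/Bi-LSTM encoders, and the dimensions (five sub-models, 8 features each) are assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sub_model(x, out_dim=8):
    """Stand-in for one sub-model (e.g. TextCNN, LSTM, or Bi-LSTM):
    maps an input token-embedding sequence to a fixed-size feature vector."""
    W = rng.normal(size=(x.shape[-1], out_dim))
    return np.tanh(x.mean(axis=0) @ W)  # mean-pool tokens, then project

def concatenated_features(x, n_sub_models=5):
    """Concatenate the feature vectors produced by all sub-models,
    yielding the joint representation fed to the final classifier."""
    return np.concatenate([sub_model(x) for _ in range(n_sub_models)])

x = rng.normal(size=(20, 32))        # a sentence: 20 tokens, 32-dim embeddings
feats = concatenated_features(x)
print(feats.shape)                   # (40,) = 5 sub-models x 8 features each
```

A linear softmax layer over `feats` would then produce the class scores; because each branch is a small recurrent or convolutional network rather than a large pretrained transformer, the whole model stays cheap to run.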


Related research:

05/26/2017  A WL-SPPIM Semantic Model for Document Classification
In this paper, we explore SPPIM-based text classification method, and th...

10/25/2017  Re-evaluating the need for Modelling Term-Dependence in Text Classification Problems
A substantial amount of research has been carried out in developing mach...

04/21/2023  Downstream Task-Oriented Neural Tokenizer Optimization with Vocabulary Restriction as Post Processing
This paper proposes a method to optimize tokenization for the performanc...

06/06/2021  Identifying Populist Paragraphs in Text: A machine-learning approach
In this paper we present an approach to develop a text-classif...

11/14/2021  "Will You Find These Shortcuts?" A Protocol for Evaluating the Faithfulness of Input Salience Methods for Text Classification
Feature attribution a.k.a. input salience methods which assign an import...

04/09/2021  BERT-based Chinese Text Classification for Emergency Domain with a Novel Loss Function
This paper proposes an automatic Chinese text categorization method for ...

09/06/2019  Understanding the Impact of Text Highlighting in Crowdsourcing Tasks
Text classification is one of the most common goals of machine learning ...
