Pre-train, Interact, Fine-tune: A Novel Interaction Representation for Text Classification

09/26/2019
by   Jianming Zheng, et al.

Text representation can aid machines in understanding text. Previous work on text representation often focuses on the so-called forward implication: preceding words are taken as the context of later words when creating representations. This ignores the fact that the semantics of a text segment is a product of the mutual implication of its words — later words also contribute to the meaning of preceding words. We introduce the concept of interaction and propose a two-perspective interaction representation that encapsulates a local and a global interaction representation. Here, a local interaction representation is one that interacts among words with parent-child relationships on the syntactic tree, while a global interaction representation is one that interacts among all the words in a sentence. We combine the two interaction representations to develop a Hybrid Interaction Representation (HIR). Inspired by existing feature-based and fine-tuning-based pretrain-finetuning approaches to language models, we integrate the advantages of both to propose the Pre-train, Interact, Fine-tune (PIF) architecture. We evaluate our proposed models on five widely used text classification datasets. Our ensemble method outperforms state-of-the-art baselines with improvements of at least 2.03%. In addition, we find that the improvements of PIF over most state-of-the-art methods are not affected by increasing text length.
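The two perspectives described above can be sketched as masked pairwise interaction over word vectors: the global view lets every word interact with every other word, while the local view restricts interaction to parent-child pairs on a syntactic tree. The sketch below is a minimal illustration under assumed simplifications (dot-product interaction scores, a hand-specified parent array, and concatenation as the hybrid combination); the paper's actual formulation may differ.

```python
import numpy as np

def interaction(H, mask):
    # H: (n, d) word representations; mask: (n, n) boolean matrix of allowed pairs.
    # Each word aggregates over the words it may interact with, so later words
    # can contribute to earlier words' representations and vice versa.
    scores = H @ H.T                          # pairwise interaction scores
    scores = np.where(mask, scores, -1e9)     # block disallowed pairs
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ H                        # interaction-enhanced representations

n, d = 5, 8
rng = np.random.default_rng(0)
H = rng.standard_normal((n, d))

# Global interaction: every word interacts with every other word.
global_mask = np.ones((n, n), dtype=bool)

# Local interaction: only parent-child pairs on a (hypothetical) dependency
# tree; parents[i] is the parent index of word i, with the root self-linked.
parents = [2, 2, 2, 2, 3]
local_mask = np.eye(n, dtype=bool)
for i, p in enumerate(parents):
    local_mask[i, p] = local_mask[p, i] = True

# Hybrid interaction representation: combine the two views (concatenation is
# one simple choice, used here purely for illustration).
hir = np.concatenate([interaction(H, local_mask),
                      interaction(H, global_mask)], axis=1)
print(hir.shape)  # (5, 16)
```

In practice the word vectors H would come from a pre-trained language model (the "Pre-train" stage), the interaction step above would form the "Interact" stage, and the resulting representations would feed a classifier trained end-to-end (the "Fine-tune" stage).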


