Discriminative and Generative Transformer-based Models For Situation Entity Classification

09/15/2021
by Mehdi Rezaee, et al.

We re-examine the situation entity (SE) classification task with varying amounts of available training data. We exploit a Transformer-based variational autoencoder to encode sentences into a lower-dimensional latent space, which is used both to generate the text and to learn an SE classifier. Test-set and cross-genre evaluations show that when training data is plentiful, the proposed model improves over previous discriminative state-of-the-art models. Our approach performs disproportionately better as the amount of training data shrinks, but when faced with extremely small sets (4 instances per label), generative RNN methods outperform Transformers. Our work provides guidance for future efforts on SE and other semantic prediction tasks, and on low-label training regimes.
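
As a rough illustration of the setup the abstract describes, the minimal PyTorch sketch below pairs a Transformer encoder with a variational latent code z that feeds both a text decoder and an SE classification head. Every concrete detail here is an assumption made for illustration: the layer sizes, the toy GRU decoder, the mean-pooling, and the 7-way label set are hypothetical stand-ins, not the authors' exact architecture.

```python
# Sketch: Transformer encoder -> latent z -> (decoder for reconstruction,
# linear head for SE classification). Trained with ELBO + classification loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransformerVAEClassifier(nn.Module):
    def __init__(self, vocab_size=10000, d_model=256, latent_dim=64,
                 num_labels=7, nhead=4, num_layers=2, max_len=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.to_mu = nn.Linear(d_model, latent_dim)      # mean of q(z|x)
        self.to_logvar = nn.Linear(d_model, latent_dim)  # log-variance of q(z|x)
        self.z_to_hidden = nn.Linear(latent_dim, d_model)
        self.decoder = nn.GRU(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)
        self.classifier = nn.Linear(latent_dim, num_labels)

    def forward(self, tokens):
        # Encode and mean-pool the sentence, then sample z with the
        # reparameterization trick: z = mu + sigma * eps.
        h = self.embed(tokens) + self.pos[:, :tokens.size(1)]
        pooled = self.encoder(h).mean(dim=1)
        mu, logvar = self.to_mu(pooled), self.to_logvar(pooled)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # Teacher-forced reconstruction: predict token t+1 from tokens <= t and z.
        dec_in = self.embed(tokens[:, :-1]) + self.z_to_hidden(z).unsqueeze(1)
        dec_out, _ = self.decoder(dec_in)
        return self.out(dec_out), self.classifier(z), mu, logvar

def loss_fn(recon_logits, label_logits, tokens, labels, mu, logvar, beta=1.0):
    # ELBO terms (reconstruction + KL) plus the supervised SE classification loss.
    recon = F.cross_entropy(recon_logits.transpose(1, 2), tokens[:, 1:])
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    clf = F.cross_entropy(label_logits, labels)
    return recon + beta * kl + clf

# Smoke test with random data, purely to show the expected shapes.
model = TransformerVAEClassifier()
tokens = torch.randint(0, 10000, (8, 32))  # batch of 8 sentences, 32 tokens each
labels = torch.randint(0, 7, (8,))
recon_logits, label_logits, mu, logvar = model(tokens)
loss = loss_fn(recon_logits, label_logits, tokens, labels, mu, logvar)
```

In a sketch like this, the single latent code forces the classifier and the generator to share one sentence representation, which is plausibly the property that helps when labeled data is scarce.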
