Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling

01/04/2021
by Sofia Oliveira, et al.

Semantic Role Labeling (SRL) is a core Natural Language Processing task. For English, recent methods based on Transformer models have allowed for major improvements over the previous state of the art. However, for low-resource languages, and in particular for Portuguese, currently available SRL models are hindered by scarce training data. In this paper, we explore a model architecture consisting only of a pre-trained BERT-based model, a linear layer, softmax, and Viterbi decoding. We substantially improve the state-of-the-art performance in Portuguese, by over 15 F_1 points. Additionally, we improve SRL results on Portuguese corpora by exploiting cross-lingual transfer learning with multilingual pre-trained models (XLM-R), and transfer learning from dependency parsing in Portuguese. We evaluate the proposed approaches empirically and, as a result, present a heuristic that supports the choice of the most appropriate model given the available resources.
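The architecture described above can be sketched in a few lines of PyTorch: a pre-trained BERT-style encoder, a single linear layer over the token representations, a softmax, and Viterbi decoding over the resulting tag scores. The checkpoint name, tag-set size, and flat transition scores below are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the abstract's architecture: encoder + linear layer + softmax + Viterbi.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "neuralmind/bert-base-portuguese-cased"  # assumed Portuguese BERT checkpoint
NUM_TAGS = 10                                         # placeholder size of the SRL tag set


class SimpleSRLTagger(nn.Module):
    def __init__(self, model_name: str, num_tags: int):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_tags)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        logits = self.classifier(hidden)          # (batch, seq_len, num_tags)
        return torch.log_softmax(logits, dim=-1)  # per-token tag log-probabilities


def viterbi_decode(emissions: torch.Tensor, transitions: torch.Tensor) -> list:
    """Return the highest-scoring tag sequence for one sentence.

    emissions: (seq_len, num_tags) log-probabilities from the model.
    transitions: (num_tags, num_tags) log-scores for moving from tag i to tag j.
    """
    seq_len, num_tags = emissions.shape
    score = emissions[0]                          # best score ending in each tag at step 0
    backpointers = []
    for t in range(1, seq_len):
        # score[i] + transitions[i, j] + emissions[t, j], maximized over previous tag i
        total = score.unsqueeze(1) + transitions + emissions[t].unsqueeze(0)
        score, best_prev = total.max(dim=0)
        backpointers.append(best_prev)
    best_tag = int(score.argmax())
    path = [best_tag]
    for best_prev in reversed(backpointers):
        best_tag = int(best_prev[best_tag])
        path.append(best_tag)
    return list(reversed(path))


# Usage: tag one Portuguese sentence (the classifier is untrained here, so the output is arbitrary).
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = SimpleSRLTagger(MODEL_NAME, NUM_TAGS)
batch = tokenizer("O menino comeu a maçã.", return_tensors="pt")
with torch.no_grad():
    log_probs = model(batch["input_ids"], batch["attention_mask"])[0]
flat_transitions = torch.zeros(NUM_TAGS, NUM_TAGS)  # assumed uniform transitions; a real system would learn or constrain these
print(viterbi_decode(log_probs, flat_transitions))
```

In practice the transition scores would encode BIO constraints (e.g., forbidding I-ARG1 after O), which is what makes the Viterbi step useful over a plain per-token argmax.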

