Adversarial Alignment of Multilingual Models for Extracting Temporal Expressions from Text

05/19/2020
by Lukas Lange, et al.

Although temporal tagging is still dominated by rule-based systems, there have been recent attempts at neural temporal taggers. However, all of them focus on monolingual settings. In this paper, we explore multilingual methods for the extraction of temporal expressions from text and investigate adversarial training for aligning embedding spaces to one common space. With this, we create a single multilingual model that can also be transferred to unseen languages and set the new state of the art in those cross-lingual transfer experiments.
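The adversarial alignment idea mentioned above can be illustrated with a toy sketch: a discriminator tries to predict which language an embedding came from, while a linear mapper on the second language's embeddings is updated with the reversed objective so the two spaces become indistinguishable. This is a minimal numpy illustration with synthetic "embeddings", not the paper's actual model or training setup; all dimensions, learning rates, and distributions are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for monolingual embedding spaces: two "languages" whose
# embeddings live in offset distributions (i.e., misaligned spaces).
d = 8
X_en = rng.normal(0.0, 1.0, size=(200, d))
X_de = rng.normal(2.0, 1.0, size=(200, d))

W = np.eye(d)                  # linear mapper applied to the second language
w, b = np.zeros(d), 0.0        # logistic language discriminator


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))


lr_disc, lr_map = 0.1, 0.1
y = np.concatenate([np.zeros(len(X_en)), np.ones(len(X_de))])

for step in range(300):
    Z_de = X_de @ W
    X = np.vstack([X_en, Z_de])

    # 1) Discriminator step: learn to tell the two languages apart.
    p = sigmoid(X @ w + b)
    g = p - y                                  # grad of BCE w.r.t. logits
    w -= lr_disc * (X.T @ g) / len(X)
    b -= lr_disc * g.mean()

    # 2) Adversarial step: update the mapper against the *reversed* labels
    #    (target 0, "looks like language one"), pushing the mapped
    #    embeddings toward the common space.
    p_de = sigmoid(Z_de @ w + b)
    gZ = np.outer(p_de, w)                     # d(adv loss)/d(Z_de)
    W -= lr_map * (X_de.T @ gZ) / len(X_de)

# After training, the discriminator should be near chance and the two
# spaces should be far closer than they started.
Z_de = X_de @ W
p = sigmoid(np.vstack([X_en, Z_de]) @ w + b)
acc = ((p > 0.5) == y).mean()
gap_before = np.linalg.norm(X_en.mean(0) - X_de.mean(0))
gap_after = np.linalg.norm(X_en.mean(0) - Z_de.mean(0))
```

In the paper this idea is applied to the embedding layers of multilingual models, with a neural discriminator instead of the logistic one above; the alternating discriminator/mapper updates are the core of the adversarial alignment.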


