In this paper, we take advantage of previous pre-trained models (PTM...
Both performance and efficiency are crucial factors for sequence labelin...
Adversarial attacks in texts are mostly substitution-based methods that ...
With the emerging branch of incorporating factual knowledge into pre-tra...
Recently, the emergence of pre-trained models (PTMs) has brought natural...
Most existing deep multi-task learning models are based on parameter sha...
Although the fully-connected attention-based model Transformer has achie...