
- Retrieve Fast, Rerank Smart: Cooperative and Joint Approaches for Improved Cross-Modal Retrieval
  Current state-of-the-art approaches to cross-modal retrieval process tex...
- How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models
  In this work we provide a systematic empirical comparison of pretrained ...
- UNKs Everywhere: Adapting Multilingual Language Models to New Scripts
  Massively multilingual language models such as multilingual BERT (mBERT)...
- AdapterDrop: On the Efficiency of Adapters in Transformers
  Massively pre-trained transformer models are computationally expensive t...
- MultiCQA: Zero-Shot Transfer of Self-Supervised Text Matching Models on a Massive Scale
  We study the zero-shot transfer capabilities of text matching models on ...
- AdapterHub: A Framework for Adapting Transformers
  The current modus operandi in NLP involves downloading and fine-tuning p...
- Low Resource Multi-Task Sequence Tagging – Revisiting Dynamic Conditional Random Fields
  We compare different models for low resource multi-task sequence tagging...
- AdapterFusion: Non-Destructive Task Composition for Transfer Learning
  Current approaches to solving classification tasks in NLP involve fine-t...
- MAD-X: An Adapter-based Framework for Multi-task Cross-lingual Transfer
  The main goal behind state-of-the-art pretrained multilingual models suc...
- Fine-Tuned Neural Models for Propaganda Detection at the Sentence and Fragment levels
  This paper presents the CUNLP submission for the NLP4IF 2019 shared-task...
- What do Deep Networks Like to Read?
  Recent research towards understanding neural networks probes models in a...
- FAMULUS: Interactive Annotation and Feedback Generation for Teaching Diagnostic Reasoning
  Our proposed system FAMULUS helps students learn to diagnose based on au...