An extensive library of symptom inventories has been developed over time...
To foster the development of new models for collaborative AI-assisted re...
Zero-shot cross-lingual transfer refers to the setting in which a multilingual model is trained...
Multilingual machine translation has proven immensely useful for low-res...
Incorporating language-specific (LS) modules is a proven method to boost...
Mixture-of-experts (MoE) models that employ sparse activation have demon...
In this work, we focus on intrasentential code-mixing and propose severa...
Recent model pruning methods have demonstrated the ability to remove red...
The current state-of-the-art for few-shot cross-lingual transfer learnin...
The advent of transformer-based models such as BERT has led to the rise ...
Zero-shot cross-lingual information extraction (IE) describes the constr...
The success of bidirectional encoders using masked language models, such...
In this paper, we investigate the driving factors behind concatenation, ...
While numerous attempts have been made to jointly parse syntax and seman...
Fine-tuning is known to improve NLP models by adapting an initial model...
This paper describes the Notre Dame Natural Language Processing Group's...
Neural sequence-to-sequence models, particularly the Transformer, are th...
Machine translation systems based on deep neural networks are expensive ...
We study two problems in neural machine translation (NMT). First, in bea...
Early detection and precise characterization of emerging topics in text...
Neural networks have been shown to improve performance across a range of...