We present a cross-linguistic study that aims to quantify vowel harmony ...
Large pre-trained language models (PLMs) have shown remarkable performan...
Self-supervised representation learning for speech often involves a quan...
Weakly supervised learning is a popular approach for training machine le...
Few-shot fine-tuning and in-context learning are two alternative strateg...
In this paper, we present MasakhaPOS, the largest part-of-speech (POS) d...
This paper investigates the performance of massively multilingual neural...
Acoustic word embeddings (AWEs) are vector representations such that dif...
In noisy environments, speech can be hard for humans to understand. Spok...
Models of acoustic word embeddings (AWEs) learn to map variable-length s...
Although masked language models are highly performant and widely adopted...
Transferring knowledge from one domain to another is of practical import...
For high-resource languages like English, text classification is a well-...
Analyzing ethnic or religious bias is important for improving fairness, ...
The detection and normalization of temporal expressions is an important ...
Recent research on style transfer takes inspiration from unsupervised ne...
Training deep neural networks (DNNs) with weak supervision has been a ho...
Even though hate speech (HS) online has been an important object of rese...
Learning semantically meaningful sentence embeddings is an open problem ...
Incorrect labels in training data occur when human annotators make mista...
Multilingual pre-trained language models (PLMs) have demonstrated impres...
Air traffic control (ATC) relies on communication via speech between pil...
Recently, neural-network-based approaches to knowledge-intensive NLP tas...
Many NLP models gain performance by having access to a knowledge base. A...
The field of natural language processing (NLP) has recently seen a large...
Even though most interfaces in the real world are discrete, no efficient...
State-of-the-art deep learning methods achieve human-like performance on...
How do neural networks "perceive" speech sounds from unknown languages? ...
Documents as short as a single sentence may inadvertently reveal sensiti...
For most language combinations, parallel data is either scarce or simply...
Listening in noisy environments can be difficult even for individuals wi...
Welcome to WeaSuL 2021, the First Workshop on Weakly Supervised Learning...
Several variants of deep neural networks have been successfully employed...
Hate speech and profanity detection suffer from data sparsity, especiall...
In low-resource settings, model transfer can help to overcome a lack of ...
Distant supervision allows obtaining labeled training corpora for low-re...
Sentiment tasks such as hate speech detection and sentiment analysis, es...
Distant and weak supervision make it possible to obtain large amounts of labeled tr...
Transformer-based language models achieve high performance on various ta...
Visual captioning aims to generate textual descriptions given images. Tr...
Current developments in natural language processing offer challenges and...
Certain embedding types outperform others in different scenarios, e.g., ...
Deep neural networks have been employed for various spoken language reco...
Multilingual transformer models like mBERT and XLM-RoBERTa have obtained...
Fine-tuning pre-trained contextualized embedding models has become an in...
A lot of real-world phenomena are complex and cannot be captured by sing...
Machine Learning approaches to Natural Language Processing tasks benefit...
State-of-the-art spoken language identification (LID) systems, which are...
Given an image, generating its natural language description (i.e., capti...
Generating longer textual sequences when conditioned on the visual infor...