The Role of Interpretable Patterns in Deep Learning for Morphology
We examine the role of character patterns in three tasks: morphological analysis, lemmatization, and copying. We use a modified version of the standard sequence-to-sequence model in which the encoder is a pattern-matching network. Each pattern scores all possible N-character subwords (substrings) on the source side, and the highest-scoring subword's score is used both to initialize the decoder and as input to the attention mechanism. This method allows learning which subwords of the input are important for generating the output. By training models on the same source but different targets, we can compare which subwords matter for each task and how the tasks relate to one another. We define a similarity metric, a generalized form of the Jaccard similarity, and assign a similarity score to each pair of the three tasks, which share a source but may differ in target. We examine how these three tasks are related to each other across 12 languages. Our code is publicly available.
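The abstract describes the task-similarity metric only as "a generalized form of the Jaccard similarity" over the subwords each task finds important. A minimal sketch of the standard weighted (generalized) Jaccard, assuming each task yields a dictionary of per-subword importance scores, might look like this (the function name and score format are illustrative, not taken from the paper):

```python
def generalized_jaccard(a, b):
    """Weighted (generalized) Jaccard similarity between two score maps.

    a, b: dicts mapping subword -> non-negative importance score.
    Returns sum of element-wise minima over sum of element-wise maxima,
    taken over the union of keys; 1.0 for identical maps, 0.0 for disjoint.
    """
    keys = set(a) | set(b)
    num = sum(min(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
    den = sum(max(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
    return num / den if den else 0.0
```

For example, if lemmatization weights `{"un": 1.0, "happi": 0.5}` and morphological analysis weights `{"un": 0.5, "ness": 0.5}`, the similarity is 0.5 / 2.0 = 0.25. With binary (0/1) scores this reduces to the ordinary Jaccard index over subword sets.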