Author Identification using Multi-headed Recurrent Neural Networks

06/16/2015
by Douglas Bagnall

Recurrent neural networks (RNNs) are very good at modelling the flow of text, but typically need to be trained on a far larger corpus than is available for the PAN 2015 Author Identification task. This paper describes a novel approach where the output layer of a character-level RNN language model is split into several independent predictive sub-models, each representing an author, while the recurrent layer is shared by all. This allows the recurrent layer to model the language as a whole without over-fitting, while the outputs select aspects of the underlying model that reflect their author's style. The method proves competitive, ranking first in two of the four languages.
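The architecture described above can be sketched in a few lines: one shared recurrent layer models the character stream, while each author gets an independent softmax output head, and attribution picks the head with the lowest cross-entropy on the questioned text. The code below is a minimal illustrative sketch with untrained random weights and made-up dimensions (`ALPHABET`, `HIDDEN`, `AUTHORS` are hypothetical), not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a small alphabet, a tiny hidden state, three candidate authors.
ALPHABET = 16      # one-hot character inputs
HIDDEN = 8         # shared recurrent state
AUTHORS = 3        # one independent predictive head per author

# Shared recurrent weights: a single language-model core for all authors.
Wx = rng.normal(0, 0.1, (HIDDEN, ALPHABET))
Wh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
# Per-author output heads: each independently predicts the next character
# from the shared hidden state.
Wo = rng.normal(0, 0.1, (AUTHORS, ALPHABET, HIDDEN))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def per_author_cross_entropy(text):
    """Run the shared RNN over `text` (a sequence of symbol ids) and return
    each head's mean cross-entropy for next-character prediction.
    A lower loss means that author's head models the text better."""
    h = np.zeros(HIDDEN)
    losses = np.zeros(AUTHORS)
    for t in range(len(text) - 1):
        x = np.zeros(ALPHABET)
        x[text[t]] = 1.0
        h = np.tanh(Wx @ x + Wh @ h)       # shared recurrence
        for a in range(AUTHORS):
            p = softmax(Wo[a] @ h)         # author a's next-character distribution
            losses[a] -= np.log(p[text[t + 1]])
    return losses / (len(text) - 1)

# Attribution: the head with the lowest cross-entropy is the best guess.
sample = rng.integers(0, ALPHABET, 50)
best_author = int(per_author_cross_entropy(sample).argmin())
```

Because the recurrent weights are shared, gradient updates from every author's text shape the same language core, while only the small per-head matrices specialize, which is the paper's mechanism for avoiding over-fitting on small per-author corpora.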


Related research

- Including Dialects and Language Varieties in Author Profiling (07/03/2017)
  This paper presents a computational approach to author profiling taking ...

- Gradual Learning of Deep Recurrent Neural Networks (08/29/2017)
  Deep Recurrent Neural Networks (RNNs) achieve state-of-the-art results i...

- Translation Quality Estimation using Recurrent Neural Network (10/16/2016)
  This paper describes our submission to the shared task on word/phrase le...

- UTFPR at SemEval-2019 Task 5: Hate Speech Identification with Recurrent Neural Networks (04/16/2019)
  In this paper we revisit the problem of automatically identifying hate s...

- KINLP at SemEval-2023 Task 12: Kinyarwanda Tweet Sentiment Analysis (04/25/2023)
  This paper describes the system entered by the author to the SemEval-202...

- RNN-Test: Adversarial Testing Framework for Recurrent Neural Network Systems (11/11/2019)
  While huge efforts have been investigated in the adversarial testing of ...
