Multi-Task Attentive Residual Networks for Argument Mining

02/24/2021 ∙ by Andrea Galassi, et al. ∙ 0

We explore the use of residual networks and neural attention for argument mining, in particular for link prediction. The method we propose makes no assumptions about document or argument structure. We propose a residual architecture that exploits attention, multi-task learning, and ensemble learning. We evaluate it on a challenging dataset of user-generated comments, as well as on two datasets of scientific publications. On the user-generated content dataset, our model outperforms state-of-the-art methods that rely on domain knowledge. On the scientific literature datasets, it achieves results comparable to those of BERT-based approaches, but with a much smaller model.
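The abstract does not spell out the architecture, but the ingredients it names (attention pooling over argument components, residual layers, a link-prediction head) can be illustrated with a minimal NumPy sketch. All weights below are random placeholders standing in for trained parameters, and the exact layer layout is an assumption for illustration, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(tokens, w):
    # tokens: (seq_len, d); w: (d,) scoring vector (random placeholder)
    scores = softmax(tokens @ w)   # one attention weight per token
    return scores @ tokens         # weighted sum of token embeddings -> (d,)

def residual_block(x, W1, b1, W2, b2):
    # two dense layers with ReLU, plus an identity skip connection
    h = np.maximum(0.0, x @ W1 + b1)
    return x + (h @ W2 + b2)

d = 8
# hypothetical random parameters standing in for trained weights
w_att = rng.standard_normal(d)
W1, b1 = 0.1 * rng.standard_normal((d, d)), np.zeros(d)
W2, b2 = 0.1 * rng.standard_normal((d, d)), np.zeros(d)
W_out = 0.1 * rng.standard_normal(2 * d)

source = rng.standard_normal((5, d))   # token embeddings of source component
target = rng.standard_normal((7, d))   # token embeddings of target component

s = residual_block(attention_pool(source, w_att), W1, b1, W2, b2)
t = residual_block(attention_pool(target, w_att), W1, b1, W2, b2)

# link score: sigmoid over the concatenated component representations
p_link = 1.0 / (1.0 + np.exp(-(np.concatenate([s, t]) @ W_out)))
print(p_link)  # a probability in (0, 1) that source links to target
```

In a multi-task setting, additional output heads (e.g. for component classification) would share the pooled representations, and an ensemble would average the link scores of several independently initialized models.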






Code Repositories


Code for doing Argument Structure Prediction using Residual Networks and (almost) without symbolic features
